Working on the back porch with the laptop, I often have the wild birds coming around, but this sweet little female cowbird seems more friendly than most:
She must have been tamed elsewhere, because she is not shy at all - unless this is somehow inherent in the species, which I doubt. The laptop was actually on my lap when I took this picture.
Unfortunately, cowbirds are terrible birds: they are brood parasites, and they don't build nests for themselves. Instead, the female lays her eggs in the nest of another kind of bird (usually a passerine), and the hijacked parents do all the work. Baby cowbirds grow very quickly and outcompete their nestmates for attention and food from the parents, and it's a mystery why the parents never realize that this stranger is not their own.
Today, ZDNET ran a piece on 30 years of Ethernet, built mainly around an interview with its inventor. Dr. Bob Metcalfe invented this technology we know and love today while at Xerox PARC in 1973, started 3Com, and was a technology pundit for a long time. He's a fascinating, colorful character who has contributed much to our technical world.
And I cannot help but take this opportunity to reproduce something I read in Sol Libes' "Bytelines" column in Byte Magazine, written in March 1982.
A report issued by Strategic Incorporated, a market-research firm in San Jose, California, predicts that Xerox Corporation's Ethernet local-area network will be a total failure within two years. According to Strategic's president, Michael Killen, "Xerox is headed for the worst failure in the company's history." He believes that Xerox lacks technological and price advantages, sales force, and customers interested in buying large systems...

This just seemed significant when I read it - enough to save it.
Michael Killen is still around as principal of Killen & Associates.
Today I received the June 2003 issue of Linux Magazine, and inside appears my Compile Time column on using assertions to track down errors and make for more robust code. Writing software is hard, and the sooner an error is caught, the easier it is to fix. By using the assert() mechanism religiously, it's possible to eliminate entire classes of programming errors.
While reviewing C code from a customer today, I ran across the incorrect use of the access(2) function call. This is very commonly - but wrongly - used as a kind of "does the file exist?" function, but that's positively not what it was designed for. It's meant to be used by setuid/setgid programs to see if the underlying user/group has permissions on the given file, avoiding ugly bit-fiddling.
I have run across this mistake time and time again, and even in non-setuid programs it can become a factor: if the program using access() incorrectly is ever part of a setuid system (say, used in a shell script called from a setuid program), stuff will just break.
In early 1988 I'd had enough of this on a project so I posted my "access flame" to a handful of Usenet groups (comp.lang.c, comp.unix.questions, and comp.unix.wizards), and most of my technical points have merit 15 years later.
Blast from the past: access-flame.txt from 1 March 1988
I've been using the Postfix mail system for some time, and a handful of customers are using it as a front end (with SpamAssassin) to MS Exchange. In these cases, the Postfix system only does relay, not final delivery, and the usual way to configure relay is to simply pass everything that comes in the door for the domain to the mail server for that domain.
The problem arises for invalid addresses (old users, typos, etc.). Postfix accepts the mail from the outside, but when it tries to deliver it to Exchange, the recipient is rejected as unknown. This puts the burden on Postfix to deliver a bounce to the sender (who, as often as not, is unknown too).
Here it behooves us to somehow teach Postfix about the valid users inside so it won't accept this bogus mail in the first place: no need to deliver any bounces. The recipient list can be maintained by hand, but this only works for very small and static operations, and even then it gets tiring. Better is to automate it.
My friend Steve Gardiner and I have built a system that automatically exports the user email list from the Exchange directory, conveys it to the Postfix mail system, and rebuilds the relay_recipients file automatically. Running this from WinAt (the command scheduler), it becomes a fully unattended system requiring very little ongoing maintenance.
I've written a Tech Tip to describe the process:
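The Postfix end of it has this general shape (the domain and file locations here are placeholders, not necessarily what the Tech Tip uses):

```
# main.cf - relay for the Exchange domain, but only for known users
relay_domains        = example.com
relay_recipient_maps = hash:/etc/postfix/relay_recipients

# /etc/postfix/relay_recipients - regenerated from the Exchange export
user1@example.com    OK
user2@example.com    OK
```

After the flat file is rebuilt from the export, running "postmap /etc/postfix/relay_recipients" regenerates the hashed map that Postfix actually consults, and anything not in the list is rejected at SMTP time.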
I've had it with HP LaserJets and their poor quality: they can't even stand being thrown four feet onto hard pavement without rendering themselves useless. They'll never make it.
---
Sigh, last night I tripped while carrying my HP LaserJet 4100N laser printer to the car from an event I was helping with: the paper tray shifted, I overcompensated, and I sent the thing flying to the concrete and made hamburger out of my hand in the process. The printer was completely destroyed - the frame was bent and key plastic parts were broken. It's now sitting in parts in my trash can.
The only upside is that the EIO (network) module was fine, as was the RAM, so I was able to replace my $1700 printer with a $999 HP 4200 unit and reuse a couple of the parts to get a clearly superior printer. 35 pages per minute doesn't suck.
I may hold off on the pavement test for a while.
The perils of publication delays: yesterday I got my June 2003 copy of Linux Journal, and it contained a review of SCO Linux 4. The review itself was entirely appropriate, but the timing was terrible. Just a few days before, SCO launched its attack on Linux, and in the process stopped sales of SCO Linux 4. This pretty much obviated the whole review.
None of this is Linux Journal's fault - of course - but it never hurts my feelings to see a competitor take a hit.
I write for Linux Magazine :-)
I build virtually all my "important" packages - defined as anything that touches the internet - directly from source obtained from the package's home site, and I have settled on a system that works really well for me.
Using proftpd as an example, from inside the source directory I run

    ./configure --help > ../configure-proftpd

and then edit the file to start it with

    exec ./configure --whatever --something --etc

Since the shell never reads past the exec, the full --help output stays in the file below it as a ready reference for tuning the options. When ready to build the thing, from inside the source directory I just run

    sh ../configure-proftpd

and it does what's required.
apt-get and rpm are for wimps :-)
I've used "wget" for a long time to fetch files from a remote web server, but I've not known of a program that would go the other direction. In the past customers have solved this with a response file:
ftp ftp.example.com < inputs
where "inputs" contained the commands to exeucte. This is a terrible mechanism because there is no error feedback - it's a bad way to script it.
My solution was to use expect to interact with the FTP client, but this requires a lot of work to test and find the exceptional cases, though it's much better than the response file method.
Recently I decided to do it properly: I used the Net::FTP Perl module and created a simple ftpput wrapper. This has been in use at several customers for some time with great success. It's nowhere near as robust as wget, but it's served us very well.
I'm writing an article on dealing with C compiler warnings, and some of them that deal with signed/unsigned conflicts are pretty hard to follow and understand. When dealing with any binary operator, the compiler normalizes the types of both operands until they are the same. These are called the "usual arithmetic conversions".
The rules vary depending on whether this is an ISO/ANSI compiler or a "traditional" one, and sometimes on the sizes of the integral data types, and it's hard to keep them straight without being an expert.
Being an expert :-), I wrote a small web-based calculator that takes two data types and some compiler parameters, shows the results of the unary conversions applied to each operand independently, then the binary conversion that normalizes the two: both operands always end up with the same final type.
Unixwiz.net "Usual C Conversions" calculator