Systems Engineer trapped on Earth...

Tech Dominates Top Brands

In a testament to the impact technology has had on our lives and the world we live in, research company Millward Brown released a report on the top 100 brands yesterday, with the top four spots taken by technology companies. Google ranked number one, followed by IBM, Apple, and Microsoft, all household names cemented into our culture. The top twenty is rounded out by other companies such as Oracle, Verizon, Hewlett-Packard, and RIM, makers of the venerable BlackBerry. In a sign of the times, it is clear computing and wireless technologies have become cornerstones of daily business and personal life. A few Chinese technology companies also appear on the list, including carrier China Mobile, signifying the value of the rapidly growing Asian market.

Clearly this trend will only continue, with our work and homes dominated by a growing reliance on technology. Only time will tell if this is a good or bad thing, of course, with technology providing everything from communication and news to mobile computing, business applications, and ways to stay instantly connected to friends, family, and even work. What will the future hold? What will be the next great technological leap? Will Google take over the world? Will there ever be anything that trumps the mainstay Windows operating system? What will the next killer app be, and wouldn't you like to be the one that turns it into a household name? Post your thoughts and ideas! Feed the ether!

The top ten is below and you can view the full list here:

Brand Value
No. 1. Google $114.2 billion
No. 2. IBM $86.3 billion
No. 3. Apple $83.3 billion
No. 4. Microsoft $76.1 billion
No. 5. Coca-Cola $67.3 billion
No. 6. McDonald’s $66.0 billion
No. 7. Marlboro (Altria) $57.0 billion
No. 8. China Mobile $52.6 billion
No. 9. General Electric $45.0 billion
No. 10. Vodafone $44.4 billion

A Girl Called Amanda

Finally, here is my second piece on the Case of the Backup Lemon. This one is about a piece of open source software that makes a handy little backup utility on the right equipment. As mentioned in Part 1 of this story, I inherited a decent machine and a horrible backup application at my new job as a System Admin. Faced with a vendor that had pretty much cast us to the wind, I did some research and found Amanda by Zmanda. I had a tight budget and needed critical data backups, plus a viable disaster recovery plan, in a reasonable amount of time. Amanda was exactly what I was looking for: it was free and ran on Linux. I had a nice machine with a 1.5 TB RAID array to run it on too. w00t! (Yes, I'm dropping the word of the year here…it's all about the page rankings, muhahahah.)

My first step was prepping the system, a solid yet older machine with an Intel Celeron 2.5 GHz chip, 512 MB of RAM, and a 1.5 TB RAID 5 SCSI array. I installed Fedora Core 7 and configured it, which is, like, way beyond the scope of this document. Once it was ready, I turned to Zmanda's excellent tutorial called "The 15 Min Backup Solution," which you can check out here: http://www.zmanda.com/quick-backup-setup.html

Following this simple guide, I had Amanda up and running, although it took a lot more than 15 minutes. There were a few hurdles, including initial problems contacting servers in other subnets, adjusting the firewall for the ports Amanda uses, and a couple of other things. All in all, these are the kinds of things you'd expect when introducing any new software to your own network environment; the configuration just tends to need tweaking when problems crop up. The user forums were a help too when connections between the server and clients kept dropping, which turned out to be a configuration issue.

The initial server configuration is fairly simple, although the config file is large, with a ton of options. This can be a bit overwhelming to the novice user, but *nix dogs should have no problems. You can set it to back up to tape or to a holding disk, which can be any piece of storage the system can see. In my case I of course used the 1.5 TB array. With only a small amount of it used by the Linux OS, I had plenty of room.
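For reference, here's roughly what the relevant bits of the server config look like. This is a minimal sketch, not my exact settings: the config name (DailySet1), email address, paths, and sizes are all illustrative, and whether you point tapedev at real tape or at file-based "virtual tapes" on the array is up to you.

    # /etc/amanda/DailySet1/amanda.conf (illustrative excerpt)
    org       "DailySet1"              # title used in the backup reports
    mailto    "backups@example.com"    # where the nightly reports get mailed
    dumpcycle 1 week                   # full dump of every disk at least once a week
    tapedev   "file:/backup/vtapes"    # "tapes" living on the big RAID array
    holdingdisk hd1 {
        directory "/backup/holding"    # staging area on the 1.5 TB array
        use 200 Gb                     # cap how much of the array it may use
    }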

Configuring a client was a simple package install followed by setting a few configuration parameters. The server and clients both need a special .amandahosts file and a few regular hosts-file entries, as well as a few other system and config settings; following the guide is the best bet for success. You also set up a disklist on the server, which is a master list of all the servers and directories you want to back up. Backing up other Linux machines works well since Amanda uses the native client installed on the target machine.
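To make that concrete, here's a hedged sketch of the two files. The hostnames, dumptype, and Amanda user name are made up, and the exact .amandahosts format (two columns versus a third service column) depends on the Amanda version and auth method you're running.

    # /etc/amanda/DailySet1/disklist  (on the server: host, directory, dumptype)
    web01.example.com   /var/www   comp-user-tar
    db01.example.com    /etc       comp-user-tar

    # ~amandabackup/.amandahosts  (on each client: who may run dumps here)
    backup01.example.com   amandabackup   amdump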

When faced with backing up Windows machines you have two options. One is to just share the drive or directory on the Winblowz box and then give a backup service account admin rights over it. This is a bit limiting because it won't back up open or system files. You could get around that by backing up a Shadow Copy volume, though, which is another thing I've been meaning to implement. The other Windows option is a bit more elaborate, involving a client and Cygwin. I decided not to bother with that part, since a large chunk of my Windows data was static and I didn't want to run Cygwin on every Windows system I wanted to back up.
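One hedged way to wire the share option in is to mount the Windows share on the backup server and treat the mount point like any other directory in the disklist; Amanda also has native Samba support via smbclient, which is another route. The host, share, and credentials file below are made up:

    # Mount the Windows share on the backup server (fstab entry or pre-dump script)
    mount -t cifs //winbox01/Data /mnt/winbox01-data \
        -o credentials=/etc/backup/winbox01.cred,ro

    # Then point the disklist at the mount point, e.g.:
    # backup01.example.com   /mnt/winbox01-data   comp-user-tar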

Amanda can be configured to email reports of your backup jobs, so I have it set to send me the daily reports as soon as they're done in the wee hours. That way, when I get to my desk in the morning, the report is waiting with all the shininess of a new email message. All that's needed to run the backups is a cron job on the server that kicks off the amdump program. This allowed me to get some reliable backups on a zero budget, which is what this article has been all about. You can learn a whole lot more over at the Zmanda site. I'll possibly be sunsetting Amanda soon thanks to a new backup system and tape drive in our 2008 budget, but it's one kick ass free backup solution.
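The cron side is about as simple as it gets. A hedged sketch: the schedule, config name, and Amanda user are illustrative, and the amcheck line is just an optional sanity check I like that mails only when something looks wrong.

    # /etc/cron.d/amanda (illustrative)
    # m   h  dom mon dow  user          command
    45    0  *   *   *    amandabackup  /usr/sbin/amcheck -m DailySet1   # nightly sanity check
    15    1  *   *   *    amandabackup  /usr/sbin/amdump DailySet1       # kick off the backup run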

The Case of the Backup Lemon

We've all had it happen. You start a new sysadmin or IT job and inherit some really horrible decisions made in the past. So it was with me, the new Security & System Admin for a small marketing company. One of the first things my new boss tasked me with was managing the backup system. Assuming it would be relatively simple, I delved into learning about it. The unit was a $14k hardware/software package built by a vendor who shall remain nameless. Called the Data Protection Unit, it shipped with over 1.5 TB of space. So far so good, right?

I soon found to my horror that the software that came with the hardware was the worst backup app I'd ever seen. It ran on Red Hat 2.x.x, ancient considering RHEL 5 had recently been released. Not only that, it used Winsock of all things to connect to Windows clients, which seemed archaic and made me wonder how old their developers were. I pictured wizened old coders sitting in front of puke-green terminal screens, jockeying for time on the mainframe and alpha testing on Novell.

Still, it was actively backing up the then Windows 2000 office network and some of the Linux production network just fine. To make a long story short, this box never produced the same problem consistently. It was always something new: restores would be problematic, a master backup of 30 GB on a single disk took THREE hours, and their support department was a joke. The system was also agent-based, so I had to throw a kludgy, slow, similarly coded program on every machine. I marveled at the brainpower behind spending $14k for it. Throw on top of that a very non-intuitive interface, and the whole thing made me want to recreate an Office Space scene complete with the rap music and baseball bat. The closest thing to a secluded field downtown was the baseball stadium across the street though, so I decided against that. The worst part was, we had nothing in the budget for a new system from a different vendor, so I was stuck with what I had for the time being.

For months I wrestled with getting the thing working reliably enough to back up everything I needed, mentally counting the days I didn't have backups for. For months I went in circles with their support engineers, letting them remote in and fix whatever new problem arose. In the end it always seemed like a band-aid was slapped on until the next thing went wrong. I would repeatedly ask them to email me, as I was often not at my desk, and they would call me anyway, creating a vicious cycle of phone tag that could eat up an entire business week. I began to complain to both the local rep who sold us the unit and to supervisors higher up in the support department. After quite some time, I delivered an ultimatum outlining what we would accept as solutions to this ongoing nightmare: a full return of the unit, a swap, or a refund. They wouldn't even provide me with a copy of their return policy, no matter how often I asked. I finally received a call from one of their sales executives, touting how they had other clients backing up TBs of space with no problems using the same unit. He finally decided to bring himself and one of their best engineers to us to check everything out in person.

The day went well, and everything was amicable. We showed them our small infrastructure, outlined the network a little, and then I sat down with the engineer to go over some things on the system. At the end of the day there was a short meeting in my boss's office with all the players, including the local sales rep. In her defense, she was independent, not affiliated with the backup system vendor, and arguably had more to lose than anyone. The VP and my boss did most of the talking, which was quite a lot. Sort of expected out of the sales pukes, no? Anyway, my boss brought up the fact that the unit could very well be a "lemon," and the idea of a new unit was bounced around. The sales exec wasn't opposed to the idea. Everyone smiled, shook hands, and that was the end of the field trip. The only downside to the whole meeting? Our return options were "limited," we were told, since the purchase was more than a year old. Now comes the shocker.

The next morning, I sit at my desk and discover the backups all failed.

At that point, another round began with their support department. Their verdict: make sure the swappable drives were properly seated, and if they were fine, a technician would come on site to make sure nothing was loose internally. This was a system that hadn't moved in at least six months and had run faithfully; to its credit, the hardware itself never crashed or experienced any kind of shutdown failure. The drives themselves hadn't been touched in months until I reseated them as they requested.

At that point we decided that was enough and didn’t renew the support contract, especially when our request to swap for a different unit was completely ignored. We ceased communication with them, and to this day their support has never followed up with us.

I think Shakespeare once said "Oh ye, so sad and comical," but maybe not. Now what the hell was I going to do for a backup system? I certainly didn't want to use that unit. On the Windows side I did the best I could with the native NT Backup utility, writing the backups daily to a separate file server for a while. I eventually came across what has saved us. It allows me to back up all of the machines in my mixed environment of Macs, Linux, and Windows, and it won't cost me anything. And guess what, boys and girls…it's open source!

To be continued in A Girl Called Amanda…

Using DNS Zones To Distinguish Environments

Many system and network administrators are responsible for supporting development, QA, and production systems for sites and web-based applications. In its simplest form, the code or app moves from one environment to the next as it progresses through the lifecycle. The systems could be located on different networks, are usually housed on different servers, and can even be geographically dispersed. Environments can also multiply, with some shops breaking things out into UAT, regression, and even business process testing as well. This easily produces sprawl across the infrastructure and can make administration difficult at times.

But how do you know which version of the site or app you're looking at if you don't have a way of identifying it? Worst case, you could accidentally change code on a QA site when it really should have been the dev site. Using different URLs is a typical choice, but that can result in wild variations that are just plain ugly, disorganized, and sometimes ridiculously long or nonsensical.

Since users depend on DNS to resolve URLs, a great trick for differentiating these environments is implementing zones to distinguish one from the other. You can create a zone for each part of the process, allowing development and other teams to use the same hostname in every environment/system and avoiding different, confusing names. This makes things trivial to identify, for example:

site.lab – Lab
site.dev – Development
site.qa – QA
site.uat – UAT
site.reg – Regression
site.com – Production

Match the URLs on the server side and everything is nice and neat. The possibilities are endless, and this keeps a tidy, organized, uniform URL scheme across the infrastructure that identifies things at a glance and takes very little time or effort. You can take this a step further with your database designs on the backend, use Active Directory domains for added features and security, and even match your system names. Granted, there are plenty of other ways to accomplish this, but the zone method doesn't require a lot of work and is visually simple for the end user.
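As a hedged example of what this looks like on the internal DNS servers, here's roughly how a BIND box might carry one of these zones. The zone name, file paths, serial, and addresses are all made up:

    // named.conf on the internal DNS server (illustrative)
    zone "site.qa" IN {
        type master;
        file "/var/named/site.qa.zone";
    };

    ; /var/named/site.qa.zone
    $TTL 1h
    @    IN  SOA  ns1.site.qa. hostmaster.site.qa. ( 2007112001 1h 15m 1w 1h )
         IN  NS   ns1.site.qa.
    ns1  IN  A    10.0.20.5
    @    IN  A    10.0.20.10   ; QA web server answering for site.qa
    www  IN  A    10.0.20.10

Only the internal zones live on the internal resolvers; the real public site.com zone stays wherever it already is, so production resolution never changes.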

Auditing By The Seat of Your Pants

Whenever you’re stuck in a small shop with a limited budget, it can be pretty hard to find a good, inexpensive application that can do five things:

Port scanning
Vulnerability scanning
Some kind of patch level detection
Reporting that wraps everything up and shows all the results by machine
A price that doesn't cost an arm, a leg, and your firstborn

With little to no budget, my auditing tools are varied and I have to cut and paste most of their results into a single report by hand. I've gotten pretty nifty with the report formats using color-coded Excel sheets, and I get to flex my writing skills, but the manual work involved really is frustrating. However, using a combo of the usual free tools (Nessus, Nmap, Microsoft Baseline Security Analyzer, Metasploit, etc.), I've managed to audit a small network of 100+ IPs and 5 subnets in around four to five days, complete with the reports. This also includes external auditing of our two public networks. I still wish I had a free or inexpensive tool that does a lot of what I'm already doing manually, especially bringing all of the results into a single report complete with an executive summary.

Now, I could be lazy and just compile all the output these tools already generate and call that a "report," but I'm the creative type and believe in clear documentation that translates for both non-technical staff and IT staff. The reports should have a uniform look, because Nessus' output format is an HTML file and Nmap's is a text or XML file. Putting them all together into a printed-out clump just looks sloppy, and I don't go for sloppy with documentation.
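For what it's worth, here's a minimal sketch of the kind of glue I end up scripting, assuming nmap and xsltproc are installed. The subnet, filenames, and stylesheet path are illustrative and vary by distro; the Nessus and MBSA pieces still get exported and merged by hand.

    #!/bin/sh
    # Scan a subnet, save the XML, and render it to HTML so the results
    # can be dropped into the master report with a uniform look.
    DATE=$(date +%Y%m%d)
    OUT=audit-$DATE
    mkdir -p "$OUT"

    nmap -sV -O -oX "$OUT/nmap-lan.xml" 192.168.10.0/24          # service/OS scan of the internal subnet
    xsltproc /usr/share/nmap/nmap.xsl "$OUT/nmap-lan.xml" \
        > "$OUT/nmap-lan.html"                                    # render the XML with nmap's stylesheet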

There are plenty of commercial tools that do this job, but all of them are pretty hefty price-wise, which leaves those with a low budget for such items in a crunch. There is a business opportunity in this area, so you would think the market would have a bit more variety. Changes in the security landscape are pushing it in that direction though, as security and compliance are becoming concerns for even some small businesses. If I were a .NET developer, I think I'd start writing something that did what I wanted. Alas, I'm not, but if any of them are out there lurking, get to coding!

ISPs Cry, Consumers Lose

Lately ISPs have been claiming that high bandwidth usage is forcing them to take steps to manage the traffic on their networks. A few, like Comcast, have taken controversial steps, causing an uproar by capping file transfers on peer-to-peer networks like BitTorrent. There have also been cases where Comcast dropped a connection altogether because of some cap that they will not disclose. They even modified their terms of use without telling their customers. Time Warner is currently testing a tiered pricing model in Texas. Tiered usage isn't that big a concern to me, since I do not use a high amount of bandwidth in my daily online activities from home, despite the fact that I'm a system administrator who connects to the office network via VPN quite frequently. So if my ISP decides to roll out tiered pricing, I really don't care.

My only problem with all of this is that current U.S. bandwidth speeds pale in comparison to those in other countries around the world. If users hogging bandwidth is forcing ISPs to consider these measures to handle the load, then why aren't they increasing bandwidth to the same levels as the rest of the world? I remember reading a report that the U.S. currently ranks dismally low on the list, so if bandwidth is an issue, why aren't they just increasing the amount available to consumers instead of trying to add all of this control over what users do? It doesn't really make any sense.

This is probably because the ISPs are making buckets of money charging you for very little in the way of speed, and they want to keep it that way. Most ISPs contacted by Computerworld for a story on this claimed their networks were robust enough to handle the load…yet here they are complaining that high-bandwidth users are causing problems. Rather than asking them why bandwidth speeds aren't comparable to the rest of the world, which would likely ease any problems the ISPs are crying about, the media is simply helping them spin the idea that there must be caps, controls, or tiered levels.

This is just plain ridiculous, and users in America should instead be demanding speeds comparable with offerings around the world. Even South Korea has us beat by a huge margin, as do other countries. Consumers on that side of the pond have more bandwidth than they know what to do with, while the United States continues to slip from its spot as a world leader in technology and innovation. Feel free to post your piece on this one.

Just to give you an idea, here's a short list of median speeds around the world in megabits per second. Actual speeds can be higher in some cases; I've read reports of Japan having upwards of 100 Mbit/sec, and a 40 Mbit/sec connection is also dirt cheap there:

Japan ||||||||||||||||||||||||||||||| 61 Mbits/sec
S. Korea ||||||||||||||||||||||| 46
Finland |||||||||| 21
Sweden ||||||||| 18
Canada |||| 8
U.S. || 2

Computerworld editor Preston Gralla has been talking about this on his blog for some time; check out some of the info in his post "Another anemic showing for U.S. broadband."

So why should consumers get shafted by the ISPs just because some movie/music/software freaks and pirates are sucking up bandwidth? The actions of a few are going to affect the speeds of everyone, and that just isn't fair to consumers who already pay too much for too little. There is also the fact that legitimate video viewing is beginning to take up a large part of the internet, with YouTube gobbling up nearly 10% of all traffic. This is a natural outgrowth of the net, but ISPs are not even interested in increasing capacity. It's all about the bottom line and how much they can get from you for the paltry speeds they provide, and that isn't going to change unless the FCC tracks true usage, which it refuses to do, and the government starts taking steps to ensure better broadband access.

Pop Culture And A.I.

Early history
The idea of automated beings able to think for themselves goes all the way back to ancient times, when stories from Greek and Western mythology told of living statues and dragon's teeth coming to life as sentient or subservient beings. Some of the earliest ideas about robots in the mechanical sense arose when the Industrial Revolution spawned a technological tidal wave, prompting people to realize automated machines could be possible. However, from the beginning there has been a negative cast to the idea of free-thinking artificial life in literature and movies, which has perhaps left a lingering unease in modern depictions.

Credit Frankenstein
One of the earliest novels to be considered science fiction was the classic Frankenstein by Mary Shelley. Published in 1818, the story tells of a creature assembled from human parts and brought to life by an obsessed scientist. Although not possessing many of the characteristics of modern robots and A.I., the story immediately set the tone for the way artificial life would be depicted in literature and pop culture. As in Frankenstein, some of the earliest stories and films portrayed artificial beings as things to fear and destroy. This theme has become synonymous with artificial life and intelligence, and is often used in film. Man vs. Machine is a literary theme taught in English and creative writing classes the world over.

Modern film history is full of notable and famously evil or flawed artificial intelligences and robots. 1927's Metropolis, one of the earliest depictions of robots on film, told a story about a robot inciting factory workers to turn on their masters. Gort, the menacing giant mechanical being in 1951's The Day The Earth Stood Still, nearly killed his master's companion before she could spit out the command to make him stop. HAL 9000, the A.I. turned murderous in 2001: A Space Odyssey, became the first electronic schizophrenic when he murdered his human coworkers because of a logic flaw. The T-800 cyborg from The Terminator wanted only the destruction of Sarah Connor. JOSHUA from WarGames tried to play global thermonuclear war with the world, for real. And who could forget the frightening red droid Maximillian in Disney's original The Black Hole? That was one robot that actually scared me when I saw him on screen as a kid.

Probably one of the best modern television shows about Man vs. Machine is Battlestar Galactica, created in 2003 and based on an earlier series. In it, the evil Cylon robots have launched an all-out war of genocide against their human makers, but they are also religious, capable of love, and even of procreation, evolving into something more than just machines. An overriding technophobia is depicted on the show as well, in that the crew of the Galactica are wary of computer networks and high technology that could be usurped or sabotaged by the enemy.

The earliest literature about modern robots appeared in the 1920s, but the idea of artificial life goes back even further. Homer wrote about maidens made of gold in the Iliad, and The Steam Man of the Prairies, written in 1868, tells of a mechanical man powered by steam. Isaac Asimov made a huge impact on A.I.'s depiction with his Robot series of short stories and novels, collected beginning in the early 1950s, where the word "robotics" was first used in print. Although he didn't vilify A.I., he did explore several moral and philosophical issues in his stories, which usually centered on the Three Laws of Robotics and a robot's ability to obey them. The First Law warns that a robot should never harm a human or, through inaction, allow a human to come to harm. It's this primary moral issue that is the theme of many of Asimov's stories, and some of them included robots turning on their human overseers.

In literature, evil A.I. has also become part of genres outside of science fiction, such as The Dark Tower series by Stephen King, in which the main characters meet an A.I. called Blaine the Mono, who controls the rail system and the city of Lud. Although not openly belligerent, he is not that helpful, and the heroes must figure out how he works in order to control him. There is also the novel Demon Seed, a horror story involving an A.I. that attempts to take over the world and impregnate a human female; it was later made into a film that ends with the birth of the baby. In Do Androids Dream of Electric Sheep? by Philip K. Dick, androids are not permitted on Earth and the main character is a bounty hunter charged with eliminating them. The film Blade Runner, starring Harrison Ford, was loosely based on it.

Geeks love it
Those who aren't bothered by the possibility of true A.I. are the scientists who are currently studying and working on it, and people who tend to embrace or work heavily with technology are also more open to the idea of intelligent systems. It's the general public, whose only real association with artificial intelligence is what they see in films and books, that is likely to have the most negative perception. But if popular culture has made a menace of the idea in the minds of common people, how will they be able to accept true A.I. when and if it does appear?

It's possible that we may see early robots and A.I. purposely made only as smart as, say, a five-year-old child, so that some form of control can be maintained. Certainly when the age of robots arrives and they're integrated into daily life, parts of human society are not going to warm up to the idea at first. Asimov wrote a story about that very thing, "That Thou Art Mindful of Him," in which the main manufacturer of robots tries to introduce them on Earth.

So how are we going to be able to create true A.I. if we are afraid of the outcome? Will this hold us back from reaching that goal? Will we only be able to go so far because we fear the consequences? If the public’s perception of advanced A.I. makes them technophobic, how will it ever be accepted?

On A Positive Note
There's much to be said about the positive side of the debate as well. Plenty of benign A.I.s and robots have been made famous in films and literature, often taking the form of friendly entities that assist their human masters. The droids from the legendary Star Wars series, R2-D2 and C-3PO, come to mind. The saga's creator, George Lucas, always claimed that R2 was the biggest hero of his six films, pointing out that he is always there to rescue someone or save the day. At the same time, we were introduced to an evil cyborg in Episode III with the character General Grievous, which points to a curious trend of including a bad robot or A.I. whenever there is a good one.

The 1986 film Short Circuit introduced Johnny Five, the robot that comes to life after being struck by lightning. He is probably one of the most memorable friendly robots in pop culture, as most can’t forget his signature line “Number Five is alive!!” and his adventures trying to stay that way despite an army hot on his trail bent on his destruction.

In Asimov's novel The Caves of Steel, a robot named R. Daneel Olivaw assists a human detective in solving a murder. The two characters would become the writer's favorite protagonists. The B-9 robot from Lost In Space faithfully warned the Robinson family of danger and protected them. Marvin the Paranoid Android in The Hitchhiker's Guide to the Galaxy helped his friends, though he annoyed them with his depressive personality. The book also had other notable A.I.s, like Eddie, the starship Heart of Gold's computer, and Deep Thought, the supercomputer that frustrated a galaxy with his enigmatic answer to life, the universe, and everything: 42.

Television brought us notables like Max Headroom, the A.I. that roamed a futuristic television network. Lieutenant Commander Data on Star Trek: The Next Generation was one of the first androids in Trekdom. And who could forget Ziggy, the supercomputer that aided Dr. Sam Beckett on Quantum Leap?

Arguably, as far as entertainment value goes, the evil ones are easier to remember. But although the bad guys in movies are always more fun because of their deviousness and the fact that they're not real, a negative message has been spread throughout culture for decades. The bright side is that there will always be some people willing to push the boundaries of fear, imagination, and impossibility. However, both professionals and hobbyists in the industry feel that A.I.'s image is just fine and that the negative perceptions in pop culture haven't harmed it.

AI Will Evolve
“Hollywood is the greatest advertisement for A.I. and robotics in history. The problem is with academic scientists and engineers not living up to the public’s expectations. A system like ALICE, which has won the award for coming closest to passing the Turing Test, could never be built in a University research lab. The pressure to be politically correct and to confine one’s research to the areas approved by the establishment, not to mention the scale in years and manpower, would prohibit any kind of believable A.I. from emerging from a University, or any government-funded research lab,” says Dr. Rich Wallace, inventor of ALICE/AIML and chairman of the ALICE A.I. Foundation.

“One of the biggest obstacles to human acceptance of chat robots is suspension of disbelief. A child can have more fun with a bot than an adult, because the kid will forgive the bot when it breaks down and gives an incorrect answer. Adults, especially highly educated ones, will tend to be more critical of the bot’s mistakes. There is actually a tension between part of people who want bots to be like superintelligent machines, always accurate, truthful, and precise; versus the part of us that wants robots to be more human, which means something like the opposite: sloppy, lying, funny, hypnotic, charismatic, and maybe sometimes truthful and accurate. Robots might be telling us to get over ourselves.”

The march of progress may be a factor as well, since like all technology A.I. research continually moves forward. And similar to other technological advances, it will do so without regard to the public’s perception of it. Many of the most famous inventions of our time were once looked upon as useless, dangerous, or just plain unacceptable.

“Moreover, technology has a kind of determinism, or at least a natural course of evolution, that appears to skip over the minds of individual inventors, despite their egos and individual passions. So I don’t think you could do much to help or hurt the advancement of anything by manipulating public perception, not for very long anyway.” Dr. Wallace says.

A recent Botworld poll showed an overwhelming 73% did not think pop culture has hurt the image of A.I. Botmasters and users did feel that there are lots of negative images in pop culture, but that this won't stop advancement, even though the majority agree that fear could hurt research. Some also feel that A.I. will only be as evil as its creator. Hmmm, bots don't kill people, people kill people?

“I just think most do not know about A.I., so the general masses would take pop culture as truth and be afraid of it. But there are others that, even though they do not know all about A.I., can take it as it is and realize that it is still Hollywood and TV, which does not always give a true, decisive picture of what is reality,” says Lili, an avid bot user and chairwoman of the Marzbot fan club. “And there are the movies that make it seem like robots will take over, like those with the ability to update themselves and learn. Although that can be a little scary, if WE are the smarter ones, that should never happen.”

“The idea of futuristic robots taking over the earth has been a topic of conversation for decades. I think the public has learned that any pop culture which attempts to give A.I. a negative image is purely fictional, and that A.I. can be used for things like marketing, learning, and household help,” says Darkmonkey, creator of the popular White Warrior series of bot templates.

So it would seem many don’t think a bad image will hurt the advancement of AI, but everyone agrees there is a negativity that has been implanted in public consciousness. Perhaps though, this will help us be more careful of all the consequences we know can happen, thanks mostly to the enduring image of artificial intelligence pop culture has given us.

Bots, A Look At Practical Apps

The definition of a robot or bot according to the dictionary is: A mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance.

Today there are millions of bots or robots in existence, doing thousands of different jobs daily. There are many varieties, from the latest electronic floor cleaners to the most advanced artificial intelligence systems. There are physical robots that can walk, talk, and dance, like Honda's ASIMO. There are several kinds of manufacturing robots, mechanical arms that weld, bolt, and cut in car factories and assembly lines all over the world. There are robots that mimic facial expressions, talk online, and attempt to think like humans, and some of the most advanced of our time are exploring Mars, millions of miles from Earth.

Online Bots
On the web, "bots" is often used to describe user agents like those employed by search engines such as Google and Yahoo, also known as spiders, crawlers, robots, and more. However, there is also a class of online bots that use services like AOL Instant Messenger and MSN Messenger. They make use of programming languages like Perl and Visual Basic to interface with software or protocols to communicate with humans and find data. Many online bots use a form of AI, including languages like AIML, LISP, and others, to carry on conversations with the humans that interact with them. Some don't have an AI at all and cannot talk, but provide data or information in different ways. These bots can be separated into two classes: AI/chat-oriented and service/information-oriented. A given bot could be one or both.

AI/Chat Oriented
These bots' primary purpose is to have natural language conversations with humans. Some of the famous chat-type A.I.s are Eliza, Racter, and the more recent ALICE. Some of these had a core function inherent in their chat ability; Eliza, for example, was designed to converse like a psychotherapist, reflecting what the user said back as questions. Today there are several thousand chat bots online, in various forms with different purposes, features, and indeed even personalities. The primary interfaces are instant messaging clients and websites, where the user can initiate a conversation. These bots are limited in scope in that they're only designed for talking and cannot do much else beyond making assumptions based on natural language, although many are capable of limited reasoning.
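To give a flavor of how this side works, here's a tiny AIML fragment of the sort ALICE-style bots are built from. The patterns and replies are just made-up examples:

    <aiml version="1.0">
      <!-- exact-match greeting -->
      <category>
        <pattern>HELLO</pattern>
        <template>Hi there! What would you like to talk about?</template>
      </category>

      <!-- wildcard pattern: <star/> echoes whatever matched the * -->
      <category>
        <pattern>MY NAME IS *</pattern>
        <template>Nice to meet you, <star/>.</template>
      </category>

      <!-- srai redirects a synonym to an existing category -->
      <category>
        <pattern>HI</pattern>
        <template><srai>HELLO</srai></template>
      </category>
    </aiml>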

Service/Information Oriented
Bots of this type are generally used to provide information on a variety of subjects to the user. Common applications include daily horoscopes, local weather, and other data-dependent services. These bots typically don't have conversational ability, but there are many that can do both, including some that deliver their data through natural language. There are also specialized bots in this class, which provide very specific information or services.

These bots are the more intriguing class, since they're capable of so much more than just interpreting responses, and it is also this type that has seen more practical advancement. Although both fields move at a rapid pace, providing information and data is the more practical application.

Where from Here?
The future of IM bots will be in providing specific information or services, especially in the enterprise and business world. Companies that work with large databases or lots of information would be an ideal environment, and it would be most advantageous where employees already use instant messaging in their daily collaboration and communication. Some of the possibilities:

An instant messaging or web bot can provide information to remote users without the complications of network logins or VPN.
Open source and well known languages make it easy to customize.
They don't need web access, just an IM client and a connection.
Fast, ready access to just about any information in an environment most users are already comfortable with.

An entire range of information could be presented in this form. The applications are only limited by imagination and the programmer's abilities.

Centralized information management would allow everyone to have instant access to the same information in real time.
A single bot could provide the interface to limitless amounts of company information and data for every employee.
Such systems could even be given their own AI personalities tailored to the specifics of the application itself.
Natural language could be used to provide data, eliminating the need for users/employees to learn complicated commands or procedures.
This data could easily be provided to cell phones and PDAs with instant messaging capabilities.

The list could go on, but the idea of such uses isn’t new. Although some are skeptical about the viability of a bot in such a role, there are successful apps already in production environments in the real world. Some interesting ones:

Amazon ASIN search bot from “Amazon Hacks” by Paul Bausch.
The Wall Street Journal bot
The HR Agent from ActiveBuddy
(Editor's note: ActiveBuddy is now called Colloquis and has moved into automated customer service software that converses in natural language with users.)
AOL’s IM Service for the Hearing Impaired
Keebler's RecipeBuddy (Note: this now appears to be dead, since keebler.com now resolves to Kellogg's.)
IBM's Lotus Sametime bots

Clearly even these uses have a downside, since web and server-based applications could easily provide the same services and already do, but the latest trends show many systems being integrated with IM. A bot application of this nature could become an inexpensive and easy way to provide enterprise-level information and could easily integrate into existing server-based systems. It is in this arena that the use of bots becomes a more compelling solution. Microsoft has an interesting white paper about the subject; although it focuses primarily on using the RTC Client API, it outlines bot-type applications.

The Future
Bots will always be around even if there is never any real mass appeal for their use. There is a strong hobby community and many skilled coders operating them. There are also several initiatives under way in the instant messaging industry that may make the future a lot brighter for the concept. Most of the major IM networks, such as AOL and Microsoft, have begun offering enterprise instant messaging products geared specifically toward business. This came about as employees began using consumer-oriented IM networks for work matters, and several companies scrambled to provide IM gateway software or enterprise versions of their services.

Microsoft currently offers Live Communications Server to businesses for enterprise messaging and collaboration. Recently they announced that future versions will allow its users to talk to others on AIM, MSN, and Yahoo, boosting IM interoperability. We may eventually see all instant messaging services able to communicate with one another, opening a whole new world. Enterprise systems like these also offer some tantalizing possibilities for IM bots, which could actually enhance such a system.

So while there isn't any killer app for bots out there just yet, their use and research will clearly help advance the science, and some amazing technology has already demonstrated their usefulness. Although there is much debate on the subject, they will be around for a long time to come. No ifs, ands, or bots about it.

pure-ftpd

Pure-FTPd is a nice alternative to the standard vsftpd daemon that comes with many 'Nix flavors. It has the ability to authenticate against MySQL, LDAP, PAM, and the passwd file. You can even chain the authentication methods together so it checks some or all of them if another fails, which is very nifty if you want a little redundancy in your logins. LDAP down? No problem! It'll just use the next method you have enabled.
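A hedged example of what that chaining looks like from the command line; the config paths are typical defaults but vary by distro, and methods are tried in the order the -l options are given:

    # Try MySQL first, then LDAP, then PAM, then the system passwd file
    /usr/sbin/pure-ftpd \
        -l mysql:/etc/pure-ftpd/pureftpd-mysql.conf \
        -l ldap:/etc/pure-ftpd/pureftpd-ldap.conf \
        -l pam \
        -l unix \
        -B    # detach and run as a standalone daemon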

By default, all configuration options are passed on the command line when starting the server, but you can have it use a standard configuration file instead, and I found it way easier to deal with the server that way. Setting up authentication with MySQL was a snap, but of course it required separate configuration on the database server to create the database, table, and users. If you want a quick and easy way of setting up users, the regular old PAM way works by just creating them on the system.

Pure-FTPd is basically vsftpd on steroids to me. The authentication options are nice, and I'll be giving it a try against my AD server at work to see how it goes; centralizing the FTP logins with Active Directory is an interesting idea. All in all, it's a good alternative to my oft-used vsftpd. You can find it at pureftpd.org.