Category Archives: Open Source

Face recognition: Google, technology and inevitability


Google's announcement about the potential use of facial recognition technology in Google Glass struck me as remarkable. First, because I have long been using precisely this example of a combination of technologies with dystopian overtones in some of my classes and lectures. And second, because of the obvious "for now" tone of the announcement, taken from the recently updated Glass Developer Policies:

"Don't use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Applications that do this will not be approved at this time."

What does that "at this time" really mean? Exactly what the announcement itself implies: that Google is in fact dying to develop this combination of technologies, which has quite possibly already been tested ad nauseam inside the Googleplex to gauge how reliable it is, but that the company simply does not want to be the one to put it on the market. It does not want to be directly linked to a development that many see as the epitome of the loss of privacy, one that has been talked about for a long time, by being the one that simply "pulls the trigger" on it.


It is a way to sidestep the objections that the United States Congress has raised about the device, objections many people share even before it officially hits the market. The same thing happened with photos: the device originally shipped with a picture-taking procedure that kept the privacy component relatively safe, but a way to take them with a simple wink was developed almost immediately. For many, Google Glass will soon be invisible, part of normality. And of course, the sooner the protocols for its use are worked out, the better: nobody wants to become a glasshole.

Google has made important advances in the field of face recognition, applied since 2009 to a test of enormous magnitude: Picasa. Google Glass is being developed, as is customary at the company, as an open platform for developers, and at the very event where that platform was presented, particular emphasis was placed on the fact that the device can easily be rooted so you can install whatever you please, with a specific session on the topic at Google I/O 2013 given by two Google engineers, entitled "Voiding your warranty: hacking Glass", and with the clear message "Disclaimer: you'll be stepping into uncharted territory and unsupported". Come on now... If anyone understands what a message like that means, a jar of honey set down in the middle of a hive of developers, it is precisely Google.

Once the lid comes off the jar, the development of the technology is completely unstoppable and inevitable, and Google knows it. Yes, the United States Congress can say whatever it wants, laws can be changed, many people may be shocked... but unofficially, we are already there. Technology cannot be un-invented. Start considering the implications.

Microsoft and the future post-Ballmer


I have slightly modified Microsoft's famous 2010 slogan, "Be what's next", to speculate a bit about the company's future after the now-imminent departure of Steve Ballmer: the person obviously responsible for its decline in influence and leadership, and the man who managed to miss, one by one, all the great technological revolutions of a decade.

An article in MIT Technology Review, "Why Microsoft's Next CEO Should Break Up the Company", speaks of the need to divide a company that has become a bureaucratic monster incapable of innovating, and speculates about a future with a series of separate mini-Bills dedicated to the operating system, desktop applications, server applications, entertainment, and online activities.

In 1998, one of the questions in the major field exam I faced before obtaining my doctorate at UCLA was precisely that: a hypothetical scenario in which the company was split into parts. At the time, the speculation concerned the possible impact of a ruling in the United States v. Microsoft antitrust case, but many of the conclusions reached then are still applicable now.

Personally, and from what I know of Microsoft, I do not think we’re talking about a company unable to innovate.

It certainly suffers from a bureaucratic, sclerotic crisis that needs to be solved with the right leadership and a cultural change, but I do not think a series of spin-offs is the way to solve that problem in a company that is fantastically well capitalized and has no particular need for the economic resources such a process would likely generate.

My impression is that the main factor holding Microsoft back derives from the excesses of a culture overly focused on closed, proprietary schemes. The Microsoft mentality is centered on what happens inside the company, because of a widespread belief that anything can be done with the resources that exist within it.


That vision closes the door to external sources of innovation: the company does not learn from its users, because they have generally been kept at a distance, behind a myriad of other actors such as resellers, partners, OEMs, integrators, etc. Proximity to the customer is essential in an era in which many innovations come precisely from studying patterns of use and the way customers adapt to one's own innovations and everyone else's.

The scenario, on the other hand, has changed completely. The companies that lead in innovation do not generate all of it from their own resources; they dedicate themselves to building platforms that enable and support innovation by third parties, whether through community development or through various kinds of structures that encourage the integration of others' creations. Openness is also key to the process of developing quality products, an essential element in getting quick responses to bugs and problems, and a guarantee of the right attitude.

What Microsoft should be considering is how to compete in a scenario where open architectures have triumphed decisively over closed ones, and how to make that evidence shape the company's development philosophies. The simple exercise of considering the open sourcing of many of its products is a way not only to improve them, but also to present a new attitude, a different image: one of Microsoft's key problems is the negative image, due largely to Steve Ballmer himself, that it generates among many users.

Competing with products developed in an increasingly open landscape requires exactly that: being much more open, starting an honest and constructive process of opening up that can generate positive dynamics both inside and outside the company. A process that requires not only the right leadership, but also the joint effort of each and every one of Microsoft's divisions. For a process like that, it is more than possible that the company is worth more together than as separate parts.

The evidence that led Bill Gates, in 2006, to decide that his foundation would support research projects that were open source must now reach the company he founded. Microsoft was, through its attitude and thanks to the principle of action and reaction, one of the companies that most favored the development of the open source mentality. Now it must embrace a philosophy that has simply become a sign of the times.

How do you build a Microsoft centered on innovation and, above all, on openness? This, and no other, is for me the main challenge the next CEO will face. And it is a challenge that decidedly not just anyone is up to.

Open Source Strategies


Some comments on my recent post entitled "Open Future" and some notes from the latest LinuxCon allow me to revisit an entry from 2009, "Understanding technology management in the open source era", which talked about how companies should manage technology in the current scenario.

No, open source does not necessarily mean that a company must "release everything, and release it now", as some simplistically pretend to understand it. Sectarianism and quasi-religious visions are of little help in a topic where things are far from black and white. The term open source describes a set of practices in the production and development of software that promote access to the source code of the final product, and that can include, as in fact they do in the vast majority of companies, a wide range of grays. Companies do not always release everything, or release it immediately, or release it in the same way.

The important issue, as I said in that 2009 post, is to understand how the traditional resource-based view of the firm, studied in every business school and interpreted as "jealously protect your specific sources of competitive advantage", fits an environment like the current one, where it has been convincingly demonstrated, as Allison Randal (Technical Architect at Ubuntu) puts it, that "free software is simply a superior model for software development".

A literal interpretation of the resource-based view of the firm tends toward technology management based on secrecy, on hiding from the outside anything that could be considered a competitive advantage. This misinterpretation of the theory has produced several generations of managers obsessed with security, with industrial espionage, with "they will copy me". As I put it in that entry a few years ago, "they see the company as a place where absolute secrecy should reign, and they often fall into the stereotype of viewing open source development communities as a kind of communist hippies who respond to schemes different from their own and who can under no circumstances be trusted."

Stereotypes also operate in the opposite direction. Some free software developers see companies as a bunch of opportunists who use parts of their code or lean on the communities for support, but give nothing back to the communities that gave birth to and nurtured the products they use. And in both camps, of course, there is a bit of everything: the naive, the exploited, the shameless, the clever and the intelligent. There are positive, negative and neutral stances. And that is precisely what the word "strategy" is about.


One of the things I tried to demonstrate in the penultimate paragraph of my recent post was that, today, it is very difficult to search for the name of any major technology company together with the words "open source" and not find a page of the company's own on the matter. What this demonstrates is nothing less than the need for every company to have an open source strategy: a series of guidelines for managing and understanding what its role is going to be and how it will act in the current technological environment. Today, all companies use technology.

Some create it, others acquire it, others obtain it from different sources and adapt it to a greater or lesser degree... and in that sense, things have to stop being done as a matter of fashion, momentary availability or opportunity, and start being done strategically. Cases such as IBM, which after refocusing on services managed to obtain a vastly superior return on its research and development through a careful strategy of allocating resources to open source, are extremely interesting.

Apple, whatever people may say, is a similar case: it does not open all of its code, but it uses and contributes back a very substantial part, while other parts of its products (notably those related to the user interface) remain strictly proprietary. Google does the same, and like them, many others. More and more, a substantial part of the competitive advantage a company can obtain from technology depends on how its open source strategy works.


How does this fit the resource-based view of the firm? Simply by understanding that openness itself is a resource. For a company, the ability to release its software and have it receive the right level of attention, instead of sinking into absolute skepticism and oblivion, can be a vital resource.

On SourceForge, the world's largest repository of open source projects, there are over three hundred thousand projects, but few maintain any really significant activity. That is, very few attract the community that was surely hoped for when they were uploaded there. Releasing code is a very difficult task: it is by no means simply "publish it and be done", but demands quality control, documentation and extra work so the code can actually be used or improved by the community, work that not all companies are willing to undertake.

Understanding how development communities work, maintaining a position and a good reputation in them, choosing which projects to work on and what resources to devote to them, knowing what to expect, or appreciating which communities are more active than others and how that affects, for example, the choice of development tools or hiring decisions... in many cases, this goes beyond the traditional understanding of technology held by those responsible for it at a significant number of companies today.

As Marten Mickos, for many years the CEO of MySQL, says: "any company with a technology strategy needs an open source strategy." The supremacy of the open source philosophy as a development methodology is, by now, perfectly obvious. What many still need is to integrate it as part of their business strategy: those furthest along in that understanding will be able to extract a great competitive advantage. And business schools, of course, should already be treating these kinds of topics as a significant part of what managers must learn to handle when they leave the classroom.

Government transparency and open source


Estonia, with a population of 1.3 million inhabitants, is one of the smallest states in the European Union. It is a highly developed country with the highest per capita income of all the former Soviet republics, a high and rising score on the Human Development Index, and high ratings in terms of civil liberties, press freedom, economic freedom and education.

Moreover, it is considered one of the most wired countries in Europe and one of the world leaders in the development of open government and e-democracy. In the period 2000-2004, the Estonian government decided to invest strategically in the development of network usage, and organized a nationwide, privately funded adult education initiative that managed to train over a hundred thousand people, 10% of the population, with rates of sustained internet use after the course of more than 70%. All Estonian schools are connected to the Internet, and students can check their grades and attendance, access class content and homework, or read teachers' final evaluations of their work.


Since August 2000, the executive cabinet's meetings have been paperless sessions run on a networked system of databases, and a description of the expenses incurred by the state can also be consulted on the Internet in real time. The electronic voting system, based on the electronic identity card, attracted 1.9% of voters in 2005, the year of its inauguration, and was used by 25% of the population in the last parliamentary elections, in 2011.

The latest step? Dispelling every possible doubt and helping to improve the system. And for that, there is nothing like opening up the code. Two days ago, the complete source code of the server application used to manage e-voting was published on GitHub (not the client side, to avoid making it too easy to create fake clients).

By publishing the code, the Estonian government expects that any programmer will be able to test it, find potential issues, vulnerabilities or security flaws, and above all, verify the full transparency of the process (an auditing system will ensure that the software used in the election matches what is available in the repository).
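
The mechanics of that last guarantee are worth a sketch. One common way to check that a deployed program matches a published repository is to rebuild it from source and compare cryptographic hashes; the following Node.js snippet is purely illustrative (the file names are hypothetical), not the Estonian commission's actual audit tooling.

    // Illustrative sketch: does the deployed binary match one rebuilt from the published source?
    var crypto = require('crypto');
    var fs = require('fs');

    function sha256(path) {
        // Hash the file's contents with SHA-256 and return a hex digest.
        return crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex');
    }

    var deployed = sha256('evoting-server-deployed.bin');       // binary taken from the election server
    var rebuilt  = sha256('evoting-server-built-from-repo.bin'); // binary rebuilt from the GitHub source

    console.log(deployed === rebuilt ? 'Binaries match' : 'MISMATCH: audit failed');

Of course, this kind of check only works if the build itself is reproducible, which is a hard problem in its own right.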

A clear case of security through transparency rather than through obscurity. The opposite of what happened with the Diebold machines used in the United States, which, precisely because they were a secret development, ended up at the center of all kinds of problems and controversies. No, managing elections in a country of 1.3 million inhabitants is not the same as in one of more than a hundred million, but clearly, the United States chose the wrong procedure.

Electronic voting systems are not at all simple. Their design must ensure that no one can falsify the result of the vote; that only eligible citizens can vote, and that each vote is counted only once; that each voter can verify conclusively that their vote was recorded correctly; that an auditor can verify the integrity of the process; that malicious participants cannot disrupt or contaminate the results; that the vote is properly isolated, so nothing can link a voter to their vote; and that a voter cannot prove to a third party how they voted (to prevent vote selling).


In the case of Estonia, a voter may vote as many times as they like during the period the vote is open, between the tenth and the fourth day before the election, and only the last vote cast counts, even if it was cast in person. This method puts control in the hands of voters, making it hard for them to be coerced into voting for a particular option or to sell their vote, since they always have the chance to change it later.
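
The "only the last vote counts" rule is simple to state and simple to implement. As a toy illustration (my own JavaScript sketch, not the Estonian system's code), a tally only needs to keep the most recent ballot per voter before counting:

    // Toy illustration: keep only each voter's most recent ballot, then count.
    function tallyLastVotes(votes) {
        var latest = {}; // voterId -> most recent {voterId, choice, timestamp}
        votes.forEach(function (v) {
            if (!latest[v.voterId] || v.timestamp > latest[v.voterId].timestamp) {
                latest[v.voterId] = v;
            }
        });
        var counts = {};
        Object.keys(latest).forEach(function (id) {
            var choice = latest[id].choice;
            counts[choice] = (counts[choice] || 0) + 1;
        });
        return counts;
    }

    // Voter A changes their mind; only the later vote for Y is counted.
    console.log(tallyLastVotes([
        { voterId: 'A', choice: 'X', timestamp: 1 },
        { voterId: 'B', choice: 'Y', timestamp: 2 },
        { voterId: 'A', choice: 'Y', timestamp: 3 }
    ])); // -> { Y: 2 }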

By opening up the source, Estonia not only improves confidence in its system; it can also serve as a reference for countries seeking to build a similar one. In Spain, one of the pioneers in the deployment of electronic ID, the technology could perfectly well be developed. But obviously, not everything depends on technology: transparency is essentially a matter of will, a desire to do things a certain way. In our country, sadly, we are busy with other things.

How Common is Linux in Australia?


Today's websites normally call for simple web hosting plans that deliver smooth performance and the email service that is usually essential. Two kinds of hosting plans are generally available: Linux web hosting and Windows web hosting. Each has its own advantages and drawbacks, but depending on a small business owner's needs, a suitable managed hosting plan can be selected for a specific website.

In the vast majority of ordinary situations, small businesses prefer Linux server hosting plans over Windows server hosting, because many positive factors push people toward that choice.

Cost Factor

The simple truth is that Linux hosting plans tend to be much more cost effective than Windows server plans. For small business people, the hosting price tag matters a great deal: they want to get as many features as possible for a small outlay. Most of them do not need the very highest level of security, and they do not care whether their database is built on an MS-SQL server or on something else. A Linux server is therefore the best option for them, offering many kinds of features at genuinely minimal cost.

Open Source Factor

Linux is open source, which lets people from all over the world contribute their knowledge to its development. This is one of the main reasons Linux servers have the power to push Windows servers far out of the competitive hosting market. Nowadays, thanks to its great flexibility, Linux has become the natural platform for many of the new technologies found in the competitive web hosting industry. This means a Linux server can offer you more features than a Windows server can.

Scripting Language Factor

There are numerous scripting languages available for building your site, such as Perl, Python, Java, and PHP. All of these were designed on Linux/UNIX web servers and do not require Microsoft support to run. But if you use a language that runs on Windows machines, like ASP.NET, then you need MS-SQL to build the database, along with other Microsoft components. It is easy to see, then, that for small businesses Linux hosting is the best way to get a website onto the internet.

Reliability Factor

Nowadays Linux is widely regarded as very stable and trustworthy. Linux machines are a little faster, user friendly, and far less prone to crashing than a Windows server. Linux servers therefore provide more advanced facilities for a website at minimal cost.

Experience and Execution Speed Factor

Linux machines have decades of practical experience providing services compared with Windows machines in the web hosting industry. There is no appreciable difference in execution speed between Linux and Windows servers, but Linux machines are generally seen to be slightly quicker at serving ordinary websites.

Efficiency Factor

Windows supports ASP, the programming language backed by Microsoft, which lets people build dynamic, database-driven websites by connecting to an MS-SQL or Access server. This raises production costs too. Linux requires neither the ASP language nor an MS-SQL database to build dynamic, database-driven websites, because it simply uses the freely available MySQL database.

Can’t Leave Windows?

Well, that's debatable, and many hard-core Linux users would beg to differ. I'll take the middle road on this one. If you can, I encourage you to dump Windows as your main operating system (OS). Unlike many others, I don't advocate giving up Windows entirely, as most of the world still runs Windows. When given the choice, choose Linux; but when not given a choice, as in most corporate settings, you'll be stuck using Windows (at least for now). I still keep a Windows box at home operational to keep up my Windows skills.


Why can't there be a middle ground between the two OSs? You want your cake and Edith too, right? Well, perhaps one of these solutions will work for you.

You have 3 main choices:

  1. Keep a Windows box around

  2. Dual boot your PC

  3. Run an emulator or virtual machine

What I'll be discussing below deals with emulators and virtual machines. Emulators and VMs provide the functionality to emulate or install another environment from within your current environment. What does that mean? Well, for the sake of keeping it simple, it means that you can run one operating system inside of another.

The first choice you'll have to make is what OS will be your base. By base I mean what system you will run 98% of the time, Windows or Linux? Again, I recommend Linux. The OS you want to use the other 2% of the time will be installed or emulated on top of your base system. Once you have installed your base OS you must select an emulator / VM. The emulators / VMs I will be discussing below are WINE, Win4Lin, CoLinux, and VMware. The base OS you select will play a part in which of the products discussed below you can utilize. I use a combination of WINE and VMware running on a base OS of Mandrakelinux 10.

So by now you are asking what do they do, how do they work, and what are their differences?

Wine

“Wine is an Open Source implementation of the Windows API on top of X and Unix.  Think of Wine as a Windows compatibility layer. Wine does not require Microsoft Windows, as it is a completely alternative implementation consisting of 100% Microsoft-free code, but it can optionally use native system DLLs if they are available.”

It's freely available with most Linux distros. I have had some success with this for programs like WinZip and WinMX, but for the most part I have found that programs don't install and/or run very well. It's a great idea, but still a work in progress. It probably works a lot better with older programs, since the fake Windows registry it sets up mimics the Win 98 registry. That is most likely the cause of the install issues as well.

Win4Lin

“NeTraverse Win4Lin 5.0 enables Linux users to run the Windows operating system programs concurrently with their existing Linux operating system without additional hardware or the need to dual boot. This configuration dramatically improves productivity and saves you money by reducing hardware and OS license upgrade costs. “

Works with Windows 95/98/ME, but I don’t think it does 2000 or XP.  Never actually used it, but it sounds pretty good.  I have seen some decent reviews, but again I’m not interested in older Windows systems before 2000 or XP.  It’s not open source and does cost about $89 for a license.

VMware

“VMware Workstation is powerful virtual machine software for developers and system administrators who want to revolutionize software development, testing and deployment in their enterprise. Shipping for more than five years and winner of over a dozen major product awards, VMware Workstation enables software developers to develop and test the most complex networked server-class applications running on Microsoft Windows, Linux or NetWare all on a single desktop. Essential features such as virtual networking, live snapshots, drag and drop and shared folders, and PXE support make VMware Workstation the most powerful and indispensable tool for enterprise IT developers and system administrators. VMware Workstation works by enabling multiple operating systems and their applications to run concurrently on a single physical machine. These operating systems and applications are isolated in secure virtual machines that co-exist on a single piece of hardware. The VMware virtualization layer maps the physical hardware resources to the virtual machine’s resources, so each virtual machine has its own CPU, memory, disks, I/O devices, etc. Virtual machines are the full equivalent of a standard x86 machine.”

This one happens to be the most expensive of the bunch at $189 for a license. However, if you can afford it, I feel it is the best! I use this one the most. You can install it on Windows or Linux. Once installed, you create separate virtual machines for each OS you would like to run. It supports many versions of Linux, almost all versions of Windows, DOS, FreeBSD and NetWare. You simply boot your Linux or Windows PC, fire up VMware, and put in the disk of the guest operating system you would like to install. It will install the complete OS! Then you simply boot the guest OS whenever you need it, without having to shut down your base OS. You can even share files back and forth between your guest OS and the base OS.

I have used this solution in the past. I was running Mandrakelinux 10 at the time, and I had successfully installed Windows 2000 and XP virtual machines on top of Mandrake. Both OSs operated as if I were running Windows natively. You even have the option to make them full screen, so it actually looks like you are running Windows. Guess what? When Windows crashes, as it's known to do, your virtual machine will most likely crash with it, but Linux contains the crash and continues to run unaffected. You simply reboot the virtual machine.

Cooperative Linux (coLinux)

Last, but not least, we have coLinux. This is a relatively new project that is still very much under development. You may not have a use for it yet, but keep an eye on it. "Cooperative Linux is the first working free and open source method for optimally running Linux on Microsoft Windows natively. More generally, Cooperative Linux (short-named coLinux) is a port of the Linux kernel that allows it to run cooperatively alongside another operating system on a single machine. For instance, it allows one to freely run Linux on Windows 2000/XP, without using a commercial PC virtualization software such as VMware, in a way which is much more optimal than using any general purpose PC virtualization software. In its current condition, it allows us to run the KNOPPIX Japanese Edition on Windows."

Good luck and happy emulating!

Linux Commands

For those of you familiar with UNIX, making the transition to Linux is a breeze. For those of you coming from Windows, this will be quite an adjustment. Luckily, it has gotten much easier over time. Now, with great desktop environments like KDE & Gnome, the need to hit the command line to perform basic operations is minimal. Average users who will use their PC for email, chatting, and browsing the net can probably skip this section, as it will be more information than they need.

Now, for those looking to explore the real power that Linux has to offer, this is the stuff for you. If you are coming from the Windows world, the closest thing you have to compare the Linux shell with is the Windows command prompt and DOS. Sometimes things just can't be done through a GUI, or at least not done as well. Therefore, it is often necessary to get behind the GUI interface (i.e., Gnome, KDE, etc.). The Linux command line goes by many names (AKA: the shell, bash, c-shell, k-shell, terminal, and/or konsole).

The most common uses of the shell for intermediate users will be launching applications, installing software, building RPM packages, etc. I don't profess to be an expert at the Linux shell; I'm learning just like you. There are so many commands that you can't possibly learn and/or retain them all. Don't worry about it. You'll find that you use the same dozen or so over and over again. You'll also find that Linux is so flexible that there are 10 different ways to do anything, and you get to pick which way works best for you. There are plenty of command reference sites, and Linux even provides its own reference guides, called the man and info pages (man meaning manual). You can call these by typing "man" followed by the command you want to learn about (ex) man tar.

Here’s one reference site I use a lot.

Here’s a great tutorial on the use of the Linux shell / bash and the man pages.

The History of Linux

So you may be asking yourself what Linux is and/or how it got started. Let's start off with explaining what Linux is. Chances are you have probably heard of it by now, but I'll pretend like you haven't. Ever heard of Unix? Again, I'll pretend like you haven't.

I'll start off with something I'm sure you are familiar with and then work into the unknown. Microsoft Windows is an example of an operating system. It's the one people are most familiar with, as it currently holds about 95% of the desktop PC market. There are many different versions, like Windows 95, 98, NT, 2000, and XP. Unix is also an operating system. It was developed around 1970 by a group of programmers working at Bell Labs. Mind you, this was well before Microsoft Windows ever started. "After three decades of use, the UNIX computer operating system is still regarded as one of the most powerful, versatile, and flexible operating systems (OS) in the computer world. Its popularity is due to many factors, including its ability to run a wide variety of machines, portability to other systems, and its multi-user / multi-processing capabilities." (Quote taken from here)

Linux and Unix sound a lot alike don’t they? Well I assure you there’s a good reason for that. “It was 1991, and the ruthless agonies of the cold war was gradually coming to an end. There was an air of peace and tranquility that prevailed in the horizon. In the field of computing, a great future seemed to be in the offing, as powerful hardware pushed the limits of the computers beyond what anyone expected. But still, something was missing. And it was none other than the Operating Systems, where a great void seemed to have appeared. For one thing, DOS was still reigning supreme in its vast empire of personal computers. Bought by Bill Gates from a Seattle hacker for $50,000, the bare bones operating system had sneaked into every corner of the world by virtue of a clever marketing strategy. PC users had no other choice. Apple Macs were better, but with astronomical prices that nobody could afford, they remained a horizon away from the eager millions. The other dedicated camp of computing was the Unix world. But Unix itself was far more expensive. In quest of big money, the Unix vendors priced it high enough to ensure small PC users stayed away from it. The source code of Unix, once taught in universities courtesy of Bell Labs, was now cautiously guarded and not published publicly. To add to the frustration of PC users worldwide, the big players in the software market failed to provide an efficient solution to this problem. A solution seemed to appear in form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a Dutch professor who wanted to teach his students the inner workings of a real operating system. As an operating system, MINIX was not a superb one. But it had the advantage that the source code was available. Anyone who happened to get the book ‘Operating System’ by Tanenbaum could get hold of the 12,000 lines of code, written in C and assembly language. For the first time, an aspiring programmer or hacker could read the source codes of the operating system, which to that time the software vendors had guarded vigorously. A superb author, Tanenbaum captivated the brightest minds of computer science with the elaborate and immaculately lively discussion of the art of creating a working operating system. Students of Computer Science all over the world poured over the book, reading through the codes to understand the very system that runs their computer. One of them was Linus Torvalds.” (Quote taken from here)

“In 1991, Linus Benedict Torvalds was a second year student of Computer Science at the University of Helsinki and a self-taught hacker. The 21 year old sandy haired soft-spoken Finn loved to tinker with the power of the computers and the limits to which the system can be pushed. But all that was lacking was an operating system that could meet the demands of the professionals. MINIX was good, but still it was simply an operating system for the students, designed as a teaching tool rather than an industry strength one.

At that time, programmers worldwide were greatly inspired by the GNU project by Richard Stallman, a software movement to provide free and quality software. Revered as a cult hero in the realm of computing, Stallman started his awesome career in the famous Artificial Intelligence Laboratory at MIT, and during the mid and late seventies, created the emacs editor. In the early eighties, commercial software companies lured away much of the brilliant programmers of the AI lab, and negotiated stringent nondisclosure agreements to protect their secrets. But Stallman had a different vision. His idea was that unlike other products, software should be free from restrictions against copying or modification in order to make better and efficient computer programs. With his famous 1983 manifesto that declared the beginnings of the GNU project, he started a movement to create and distribute software that covered his philosophy (Incidentally, the name GNU is a recursive acronym which actually stands for ‘GNU is Not Unix’). But to achieve this dream of ultimately creating a free operating system, he needed to create the tools first. So, beginning in 1984, Stallman started writing the GNU C Compiler (GCC), an amazing feat for an individual programmer. With his legendary technical wizardry, he alone outclassed entire groups of programmers from commercial software vendors in creating GCC, considered as one of the most efficient and robust compilers ever created.

By 1991, the GNU project had created quite a few open source applications. The much awaited Gnu C compiler was available by then, but there was still no operating system. Even MINIX had to be licensed. Work was going into the GNU Kernel HURD, but that was not supposed to come out for a few years. (Sidenote: HURD is still being developed to this day. Designers have changed paths several times, continuing to delay its release.)

That was too much of a delay for Linus.” (Quote taken from here)

So to help understand this, think of the programs you use most often, like Internet Explorer, chat programs, email programs, and Microsoft Office (i.e., Word, Excel, Outlook, etc.). Those are standalone applications, all built to run on top of the Micro$oft Windows operating system. The Gnu project was helping to lay the groundwork that would eventually form the application side of their operating system. Their hope was that someday their applications would sit on top of the GNU Kernel HURD. The major component behind any operating system is the Kernel. So what is a Kernel? Basically, it's the mediator between the computer's application software and its physical hardware. It helps to manage things like the PC's memory and the CPU.

Linus was searching for the power of a UNIX machine in the comfort of his own home. The average user could never afford to purchase UNIX or the hardware needed to run it. Linus was determined to build such a system. He buckled down and coded away, and by mid September 1991 Linux version 0.01 was released. He released the source code for his Kernel onto the Internet for others to grab and use. He thought of it as a personal hobby and never imagined it would amount to much. He asked for suggestions and comments on the Kernel, and people latched on. Many people began hacking away at the Kernel and offering suggestions and patches, and Linus became the gatekeeper of the Linux Kernel source code. From that point on, all the way up through to the present day, Linus has decided what makes it into the official Kernel source code. Each version has become increasingly faster, more stable, and offered increased support for newer types of hardware. In fact, Linux tends to require a less powerful PC than Windows to run.

It didn't take long for people to put the two projects together. Linus had created the very thing that the GNU project was lacking. People began running the Linux Kernel with the GNU applications on top of it. Over time, more and more applications were being developed, and support for hardware was growing. "Soon, commercial vendors moved in. Linux itself was, and is free. What the vendors did was to gather up various software applications and put them in a distributable format, more like the other operating systems with which people were more familiar. Red Hat, Debian, Mandrake, and some other companies gained substantial amount of response from the users worldwide. With the new Graphical User Interfaces (like X-windows, KDE, GNOME) the Linux distributions became very popular." (Quote taken from here) These commercial vendors took the Linux Kernel, bundled it with GNU projects and other open source applications, added a few of their own tweaks, and released it as a complete operating system. Each distribution has its own claim to fame. Some claim to have rock solid security, others claim to look and run just like Windows, and some focus on user-friendliness, etc.

As a closing note, it's important that you realize that Linux is not the whole operating system. It's simply the Kernel which helps power the operating system. Many people believe that the entire OS is Linux. Richard Stallman would prefer that everyone call Linux distributions, which bundle Gnu apps with Linux, Gnu/Linux. I would say that's not a lot to ask from the man who gave us the Gnu Project and the Gnu General Public License (GPL) that so many open source applications utilize.


MP3 Players & Linux

This article will be a bit off my normal routine.  I usually like to try things out before I write about them, but in this case I am making an exception.  Mostly because I’m not sure when I’ll have the time to try some of this stuff out, but I thought it was interesting enough that I wanted to share with everyone now.  Anyway, this article will focus on popular MP3 players and open source programs that either work on them or with them.


The Apple iPod

Below are a few projects that may interest iPod owners…

iPodLinux

First up is the iPodLinux project. The goal of this project is to port Linux to the iPod, meaning you actually run Linux directly on your iPod. The project has been successful in porting a kernel and has developed a user interface called podzilla. It also appears a bunch of small applications have already been developed, like file browsers, image viewers, games, etc. You can find details on the applications here & some screenshots here. It looks as if they have even had some success getting video to play on the iPod Photo. So what versions of the iPod work? From their site, "iPodLinux is currently safe to install on 1st, 2nd, and 3rd generation iPods. Development is currently on-going on later generations of iPod, including the fourth generation click wheel, mini, U2, Photo/Color, Nano, and Video."

The only iPod I have is the Shuffle so I will not be trying this out for a while.  When my Dell DJ breaks down and it’s time for a new player perhaps I will buy an iPod then and try it out.  When that happens I’ll report my impressions of the project.  However, from the outside I have to say it looks promising.

Next up... an iTunes-like application

SharpMusique
Ok, so the iPodLinux project is for putting Linux on your iPod, but what about something that lets you purchase music for your iPod, like iTunes? iTunes only runs on Windows and Macs, so what is a Linux user to do? Don't fret. That's the gap SharpMusique tries to fill. It's an application developed by the man many have come to know as DVD Jon (the man who brought DVD copying to the masses).

SharpMusique, originally known as QTFairUse, can now be used on Linux systems. It bypasses Apple's DRM, allowing music bought from iTunes to be played on Linux. The application allows users to perform the following actions with the iTunes Music Store:

  • Preview songs
  • Signup for an account
  • Buy songs and albums
  • Redownload songs that you bought with SharpMusique
  • Redeem Pepsi caps
  • Redeem gift certificates

You can find out more information about the project here.

gtkpod
Next up is something to manage your music files, sync them with your iPod, etc.  That is the goal of the gtkpod project, “gtkpod is a platform independent Graphical User Interface for Apple’s iPod using GTK2. It supports the first to fourth Generation as well as the iPod mini, iPod Photo and the iPod shuffle.”  Here are some good screenshots.

Next up is the Dell DJ…

Dell’s Digital Jukebox (Dell DJ)

There are several interesting projects involving the use of the Dell DJ with Linux.

Gnomad2
First up is Gnomad2.  “Gnomad is a GTK+ client program for the NOMAD Jukebox, using libnjb and libid3tag to handle the jukebox communications and ID3 tagging procedures.”  Not originally written for the Dell DJ, but does work since the Dell DJ is based on the player that Gnomad was created for.  This basically works like the Dell Jukebox Explorer works in Windows.  Here are a few screenshots.

DJSync
Gnomad2 is great for managing your music files, but it lacks the ability to sync up with your DJ.  That’s where this project comes into play.  DJSync allows you to sync the DJ with your local music library on Linux.

So that about wraps it up for this article.  Hopefully you have found this useful.  If you have any experience with, opinions about, or alternatives to these projects please post your comments.


jChat – YUI, Jaxer, & ActiveRecord


For those familiar with MiaCMS you’ll already know I’m a huge fan of the Yahoo! User Interface Library (YUI).  I recently finished up the JavaScript work for version 4.8 of MiaCMS.  With some free-time on my hands I figured what better way to fill it than with a new project?  So I set out to learn some new technology and see how I might mash it up with some existing skills like YUI.  The new technologies I decided to experiment with were Aptana’s Jaxer and their new ActiveRecord.js framework.

Jaxer ships with a number of basic samples, but I've seen quite a few people online in search of more complex examples, specifically ones that make use of the new ActiveRecord.js library. The extended example I developed is called jChat. jChat is a fully functional chatroom application that demonstrates the integration of the following web-related technologies: HTML, CSS, JavaScript, MySQL, YUI, Jaxer, and ActiveRecord.js.

jChat’s DOM access/manipulation, event handling, and animation code makes heavy use of YUI.  The YUI Library is a set of utilities and controls, written in JavaScript, for building richly interactive web applications using techniques such as DOM scripting, DHTML and AJAX.
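
To give a feel for what that looks like in practice, here is a minimal sketch of the kind of YUI 2 patterns involved; the element IDs are hypothetical, not taken from the jChat source.

    // A minimal, hypothetical sketch of common YUI 2 idioms: DOM access,
    // event handling, and animation. Element IDs are made up for illustration.
    YAHOO.util.Event.onDOMReady(function () {
        // DOM access
        var panel = YAHOO.util.Dom.get('chat-messages');

        // Event handling
        YAHOO.util.Event.addListener('send-button', 'click', function (e) {
            YAHOO.util.Event.preventDefault(e);
            // ... send the chat message via an Ajax call here ...
        });

        // Animation: fade the message panel in over half a second
        new YAHOO.util.Anim(panel, { opacity: { from: 0, to: 1 } }, 0.5).animate();
    });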

Jaxer is an Ajax server that allows us to build entire applications using JavaScript and Ajax. The theory is that JavaScript experts can build applications without needing additional server-side languages (ex) PHP. There is nothing wrong with PHP, in fact it just so happens to be one of my favorite languages, but the important point to be made here is that Jaxer eliminates the requirement for additional server-side languages. That being said, you don't have to do away with server-side languages, and in some cases it might make a lot of sense to utilize Jaxer with a server-side language like PHP. Another added bonus developers get with Jaxer is code reuse. Typically we see developers implementing similar functionality in multiple languages to handle things like data validation, for example. With Jaxer it is possible to write the JavaScript validation code once and utilize it both client and server side. This could therefore lead to less debugging, simpler testing, and faster time to market.
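
Here's a tiny sketch of that reuse idea. In Jaxer, a script tag marked runat="both" runs on the server and is also sent to the browser, so a validation function only has to be written once; the function below is hypothetical, not taken from jChat.

    <script runat="both">
      // Written once; with runat="both" Jaxer executes this on the server
      // and also serves it to the browser, so the client-side check and the
      // server-side check can never drift apart. (Hypothetical function.)
      function isValidMessage(text) {
        return typeof text === 'string' &&
               text.replace(/\s/g, '').length > 0 && // not just whitespace
               text.length <= 500;                   // arbitrary length cap
      }
    </script>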

ActiveRecord.js is a cross-browser, cross-platform JavaScript ORM. ActiveRecord.js makes it easy to begin working with databases in JavaScript on the client or server, with Google Gears and Chrome, Aptana Jaxer, Adobe AIR, or any platform supporting the W3C HTML5 SQL specification (currently WebKit and iPhone). Overall I found ActiveRecord.js simple to integrate and a pleasure to use. I found a few bugs initially (mostly just some syntax issues preventing proper minification). Thanks to the power of GitHub I was able to fork it, make some modifications, and submit a pull request, which they ultimately accepted and merged with the master branch. I've been working on another Adobe AIR related project which I started before the first beta release of ActiveRecord.js. It makes use of AIR's native support for SQLite, but I'll more than likely rewrite the database code to make use of this new library, which should have the added benefit of easing application updates/database migrations down the road.
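
For a feel of the API, here is roughly what the usage pattern looks like, based on my memory of the early ActiveRecord.js examples; treat the adapter, model, and field names as assumptions rather than jChat's actual schema.

    // Rough, hypothetical sketch of the ActiveRecord.js pattern on Jaxer.
    ActiveRecord.connect(ActiveRecord.Adapters.JaxerMySQL);

    // Defining a model maps it to an underlying 'messages' table;
    // the empty strings declare string fields with '' as the default.
    var Message = ActiveRecord.create('messages', {
        author: '',
        body: ''
    });

    // From then on, persistence is plain method calls.
    var m = Message.create({ author: 'chris', body: 'hello, room' });
    var recent = Message.find({ all: true, order: 'id DESC', limit: 10 });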

I’ve put all the source code up on github so that others can benefit from my learnings.  Chatrooms have been done a million times so jChat is less about the overall application functionality and more about the pairing of technology.  Feel free to download, learn from, and/or fork jChat on github – http://github.com/cauld/jchat/tree/master.  Comments, suggestions, and contributions welcome.