Nov 13, 2007

Setting up my favorite work environment: Part one - VMWare player and Ubuntu

Finally, I have finished setting up my working environment at the new company. Most of my time was spent configuring a "Gutsy Gibbon" running in VMware Player. The image comes from thoughtpolice (http://www.thoughtpolice.co.uk/vmware/). I did some customization, including adding a new virtual disk and installing a couple of new packages, among them sun-jdk-1.6 and Samba. I'd like to share some of my experiences and lessons learned here.

So let me start from the beginning: getting myself acquainted with the thoughtpolice Ubuntu release. There is no "root" login in this distribution. If you need root privileges to do something, you invoke the command as "sudo your-command your-argument-list". If you really want to see the "#" prompt character, especially when you are configuring your system like me, you can run "sudo bash". Note that by default this thoughtpolice release provides an account "notroot" with password "thoughtpolice", which can sudo into root privileges. But when I created a new account "greenl", I found it was not allowed to sudo into root privileges, and /etc/sudoers did not give me any clue. What you can do is edit the /etc/group file and add your new username to each line that "notroot" appears in. After I did this, my own account "greenl" could successfully assume root privileges.
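As a sketch, the same group fix can be done with usermod instead of editing /etc/group by hand. The group name is an assumption here — on stock Gutsy the sudo-capable group is typically "admin", but check what `groups notroot` reports on your own image first:

```shell
# See which groups the stock "notroot" account belongs to
groups notroot

# Add your own account to the sudo-capable group reported above
# ("admin" is typical on Gutsy; replace it with what you actually saw)
sudo usermod -a -G admin greenl

# Log out and back in, then verify the new membership works
groups greenl
sudo -v
```

The `-a` flag matters: without it, `-G` replaces the account's supplementary groups instead of appending to them.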


Now that I am a superuser, my ambition grows. The default thoughtpolice release provides 4 virtual disk files, which give you up to 8GB in your virtual Linux world. That's not fair! Today's typical hard disk has 160GB of space! And since I know I am going to install WebSphere and DB2 in my Ubuntu, 8GB is not enough. I need much more! So I went to Google and rambled for a while. Before long I found a handy set of empty virtual disk images created by John Bokma using the free Qemu tool, and I picked the largest one (20GB) to extend my thoughtpolice Ubuntu's disk space. It is not as easy as copying and pasting a file; beyond that, you need a few additional steps to turn the raw file into usable virtual disk space:
  • Open the ubuntu-server-7.10-i386.vmx file in whatever your favorite editor is (don't be scared by the .vmx extension; it is 100% a plain text file). You will find something like
    • scsi0:0.present = "TRUE"
    • scsi0:0.fileName = "ubuntu-server-7.10-i386.vmdk"
    • scsi0:0.writeThrough = "TRUE"
    • ide1:0.present = "TRUE"
    • ide1:0.fileName = "auto detect"
    • ide1:0.deviceType = "cdrom-raw"
  • You can see that we can insert something here for our new 20GB virtual disk file. In my final version of the ubuntu-server-7.10-i386.vmx file, the new lines I added are
    • ide0:0.present = "TRUE"
    • ide0:0.fileName = "ubuntu-server-7.10-i386-ide.vmdk"
  • Note that "ubuntu-server-7.10-i386-ide.vmdk" is simply the new virtual disk file we have added. But is that enough? After you restart your virtual machine and log into your Linux virtual box, you will see that nothing has changed. Then think twice: if this were a real Linux box and you added a new hard disk to it, how would Linux react? It would find a new device, but that's all. So you need to do something to let Linux actually use your new device, right? Yep! You need to run sudo fdisk to create a new partition on the raw disk, mkfs.ext3 to "format" the new partition, and finally edit your /etc/fstab file so the new partition is mounted into your file system at boot.
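The partition/format/fstab steps above can be sketched as follows. The device node is an assumption — with the disk attached as ide0:0 it usually appears as /dev/hda, but confirm with dmesg on your own system before running anything destructive:

```shell
# Confirm which node the kernel assigned to the new IDE disk
dmesg | grep -i 'hd[a-z]'

# Partition the raw disk; inside fdisk type:
#   n (new partition), p (primary), 1, accept the defaults, w (write)
sudo fdisk /dev/hda

# "Format" the new partition with an ext3 filesystem
sudo mkfs.ext3 /dev/hda1

# Mount it at boot by appending an entry to /etc/fstab
# (fields: device, mount point, fs type, options, dump, fsck pass)
sudo sh -c 'echo "/dev/hda1 /usr ext3 defaults 0 2" >> /etc/fstab'
```

The fsck pass of 2 lets the root filesystem be checked first at boot, with this partition checked afterwards.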
These are exactly the steps I took to expand my VMware Ubuntu disk space. A few more words about my mount point for the new partition: /usr. I chose /usr because by default the system installs new packages into that directory. Be careful: before you go straight to "sudo vi /etc/fstab", you should at least back up your current /usr directory with "sudo mv /usr /opt", and then update your PATH environment variable. On the first boot after you have changed the /etc/fstab file, you will find the system does not behave normally, simply because your /usr is now an empty directory. So what you should do is "sudo cp -R /opt/* /usr" and "sudo rm -R /opt". Restart your system and rock!
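Putting the migration together, here is a sketch of the whole sequence. It mirrors the steps in this post with two cautious deviations: it runs from a root shell (because /usr/bin/sudo itself disappears once /usr is moved aside), and it parks the old contents in /usr.old rather than /opt, to avoid colliding with an existing /opt directory. The /dev/hda1 device name is the same assumption as above:

```shell
# Work from a root shell; sudo lives under /usr and will
# vanish as soon as /usr is moved aside
sudo bash
export PATH=/bin:/sbin:$PATH

# Park the current /usr contents (the post moves them to /opt;
# /usr.old avoids clashing with an existing /opt)
mv /usr /usr.old
mkdir /usr

# Mount the new partition now rather than waiting for a reboot
# (it should already have its /etc/fstab entry by this point)
mount /dev/hda1 /usr

# Copy the old contents onto the new partition, then clean up
# (cp -a would additionally preserve ownership and modes)
cp -R /usr.old/* /usr
rm -R /usr.old
```

A full backup of the virtual machine directory before starting costs one copy operation and makes the whole experiment reversible.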

Next time I will share my findings on setting up convenient communication between the host and the virtual system, including SSH and Samba.

Nov 12, 2007

Abstraction, Encapsulation, Information hiding: OOP and AOP

From "Code Complete 2": Abstraction says, "You are allowed to look at an object at a high level of detail." Encapsulation says, "Furthermore, you are not allowed to look at the object at any other level of detail." I don't think there is any other way to explain these two concepts so simply and clearly.

Also from "Code Complete 2", there are two secrets to "information hiding":

1. Hiding complexity so that your brain doesn't have to deal with it unless you are specifically concerned with it
2. Hiding sources of change so that when change occurs, the effects are localized

Actually, the two points define two sides of dealing with the essential problem of software engineering: managing complexity. Different methodologies and philosophies have been developed to cater to this essential problem, including OOX (where X can be D or P) and AOX (Aspect-Oriented X).

A purely object-oriented approach saves you from worrying about how a specific piece of functionality is implemented: you just need to know what function to call, not how that function is implemented.

An aspect-oriented approach moves one step further: you don't even need to think about what function to call if that function is beyond your current concern. For example, you don't need to think about logging, error handling, or authentication while you are coding/designing the Account Transfer functionality. Instead, in AOP your brain deals with only one aspect of the program at a time, and at some point it steps up to think about how all the aspects will be weaved together. Of course, at that point it does not deal with how each aspect works internally; it only cares about the interactions among aspects.

Steve defines "inheritance" as defining similarities and differences among objects (like full-time employees and part-time employees). That is correct. But there is a problem: how can you clearly define the similarities and differences among objects? If you keep yourself updated on developments in software design and implementation, you should be aware that inheritance is recommended far less often now than, say, ten years ago. I think the reason is that while inheritance provides one approach to concept modeling and method reuse, it suffers from an ugly structural problem: high coupling. There are many good discussions of this topic; refer to http://www.travisswicegood.com/index.php/2007/10/11/why_class_inheritance_sucks for details.

One interesting essay on inheritance argues that "The difference between is-a and has-a relationships is well known and a fundamental part of OOAD, but what is less well known is that almost every is-a relationship would be better off re-articulated as a has-a relationship." So the gist is to favor composition over inheritance. I would say it is a good approach, and I follow it in my systems. What I want to add is that there is one concept hiding behind the curtain: interface publication. Whenever you use the composition paradigm, you are actually defining an interface that is implemented by both the composed class and its composition blocks. So the problem becomes: at what granularity do you want your interfaces? The more fine-grained they are, the more elaborate your system is, and the easier it is to refactor when needed. On the other side, the more coarse-grained they are, the less complex your system is, and in turn its transparency improves. Just as Steve says, "the challenge of design is creating a good set of tradeoffs from competing objectives."

Nov 7, 2007

Stratification and reuse

Steve defines stratification as "trying to keep the levels of decomposition stratified so that you can view the system at any single level and get a consistent view". Later, in a following example, he describes a common scenario: when people try to build a "modern" system on top of "a lot of older, poorly designed code", it is better to design "a layer of the new system that's responsible for interfacing with the old code".

Theoretically, this is good and reasonable advice. But in the real world, the advice can sometimes mislead. The point is that your "modern" system will itself become outdated, and some successors might consider it poorly designed when they build the future system on top of it; hence an additional layer will be added between your system and theirs. Eventually the system becomes a sandwich of many layers, each built on the assumption that its base is poorly designed and hence not well stratified.

This raises two questions:
1. How do you prevent your "modern" system from becoming, some day, a poorly designed base that others stratify over?
2. When should we stop piling up the sandwich and build the system from the ground up?

In one of my previous projects, I once found code whose history traced back to the 1980s. And there were some strange layers created to stratify the legacy code away from the new codebase. That strategy worked, but I don't know whether we could have saved a lot of money and time by building the system from the ground up. (The whole product lasted for 3 years and cost more than 100 million dollars.)

Another interesting story: after revolutionary changes to its codebase, Firefox (previously called Firebird) was reborn from the ashes of Netscape Navigator.

So it is a real challenge to decide when it is the right time to abandon your legacy code instead of reusing it.