
Archive for the ‘linux’ Category

To quote abigor on Slashdot:

In all fairness, no platform is perfect, let’s face it. You seem to be commenting on OS X (hard drives, 3d performance, etc.), so let’s see:

If you want non-working cut and paste (the general case is it only works for text), no 3d performance at all, barely any wireless support, no commercial software support including de facto standards like MS Office and Photoshop, no games, amateurish and inconsistent guis, etc. ad infinitum, then run desktop Linux.

If you don’t mind a pretty substandard operating system in return for all the software you could ever want and you don’t need Unix, run Windows.

If you want a usable, well thought-out desktop Unix with lots of commercial software (though much less than Windows), good open source and open standards support, and you don’t care about games, run OS X.

As cliche as it sounds, it’s all about what works best for you.

It has been almost four years since I first moved my personal computing to Mac OS X – during the Tiger years – and I have enjoyed every bit of the experience. After two more major OS releases and several flawless Migration Assistant experiences, I find I have a reliable platform on which I can accumulate personal data over the years. Long gone are the days of messing around with custom Linux kernels, crafting scripts to migrate mail from mbox to maildir and building custom .debs to enable 3D effects in my favorite window managers. Though I had some fun and killed some lonely moments, that constant dissatisfaction with the system and the nonstop changes to it are not only distracting but also keep your valuable data in refugee status all the time. At the end of the day, a computer is just a tool. Enjoy many years of photos well organized in iPhoto and ready to be used in whatever document you are creating; take a short coffee break after unpacking your shiny new 27-inch iMac and find the system ready to use within an hour, with all your settings and data automatically migrated; or fire up Terminal.app, MacVim and Espresso and get some work done.

Read Full Post »

CentOS differs from many other distros by enabling the root account during setup. I prefer Ubuntu’s (and OS X’s) way of using a separate admin account and keeping the root account disabled. When there is a need to perform an administrative task, you just run the command with sudo, which reduces the risk of abusing root privileges and doing something stupid. Following this guide, I was able to make this work on CentOS.

  1. First, log in as root. You can switch to root from any account by running su and typing the root password.
  2. Enable sudo. If you are not comfortable with vim, run

    export EDITOR=gedit
    

    first. Now run

    /usr/sbin/visudo
    

    The lines starting with # are comment lines and will be ignored. Just uncomment the following line:

    # %wheel  ALL=(ALL)       ALL
    

    by removing the # at the beginning. This line means that anybody in the group wheel can use sudo to run any command, as any user, on any host.

  3. Add an account to the group wheel. For example, if the account you use to perform administrative tasks is isteering, run

    gpasswd -a isteering wheel
    

    Now you can sudo as the user isteering.

  4. Disable the root account. This is done by running passwd to lock it:

    passwd -l root
    

It is quite obvious that after performing the above steps we have just created a second root account: the user isteering is effectively the same as root, only with a different name. So we have not added much protection if an attacker can guess the name of this new account. You might therefore want to limit where this user can log in from. Use your favorite editor to edit /etc/security/access.conf and add the following line for the admin group:

-:wheel:ALL EXCEPT LOCAL 192.168.1. 72.14.207.99

This denies users in group wheel login from anywhere except local logins, the 192.168.1. subnet (note the trailing dot) and the host 72.14.207.99. You still need to add this line

auth       required     pam_access.so

to /etc/pam.d/sshd to tell the SSH server to consult this access control; otherwise the SSH server will by default ignore this access control mechanism built into PAM.
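
To double-check the whole setup, something along these lines should work (isteering and the addresses are the ones used above; yourserver is just a placeholder for your machine):

    su - isteering
    sudo whoami                # should ask for isteering's password and print "root"
    ssh isteering@yourserver   # from a host outside 192.168.1. and other than 72.14.207.99, this should now be refused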

Read Full Post »

Install Sun Java and Tomcat on CentOS 5

I am not a big fan of Red Hat or its derivative distributions, but this time I was forced to use CentOS because the server’s RAID card is only well supported by RHEL and (therefore) CentOS. As a newbie to RPM and yum, my way of doing things may be quite clumsy, so I’d appreciate your comments and corrections.

First, the default Java environment on CentOS is GIJ, as on most Linux distros. So the first thing to do is to get the official Java 6 installed. Though there are tutorials on creating an RPM package from Sun’s distributed files, I am only setting up this one machine and will simply install Sun’s binary distribution directly.

  • Download the binary distribution from Sun’s website: http://java.sun.com/javase/downloads/index.jsp. Make sure you download the “self-extracting file” for “Linux Platform”.
  • You can also copy the URL of the download file and use wget to download it from the command line. What I did was:
    cd ~/Desktop
    wget http://www.java.net/download/jdk6/6u10/promoted/b32/binaries/jre-6u10-rc2-bin-b32-linux-i586-12_sep_2008.bin
    
  • Now run the installer file. Before you can run it, you need to mark it as executable:
    chmod u+x jre-6u10-rc2-bin-b32-linux-i586-12_sep_2008.bin
    ./jre-6u10-rc2-bin-b32-linux-i586-12_sep_2008.bin
    
  • This just extracts the JRE into the current folder. We need to move it to a permanent location of your choice; any location would work. I chose /usr/lib, so I did:
    mv jre1.6.0_10 /usr/lib/
    
  • Because we might update to a new JRE in the future, and we do not want to change everything that depends on the JRE location after such an update (think about changing 20 JAVA_HOME settings after each JRE update), we will set up a symbolic link (a shortcut, in Windows vocabulary) for the JRE. In the future, when we update the JRE, we just update this link to get all other programs to use the new JRE:
    cd /usr/lib
    ln -s jre1.6.0_10 jre
    
  • In the future, when we need to set JAVA_HOME, we will set it to the link /usr/lib/jre, which always points to the latest JRE. So when getting a new JRE, we just update this link and do not need to change JAVA_HOME for each program we use. You can also set up a global JAVA_HOME for programs that respect it:
    echo "export JAVA_HOME=/usr/lib/jre" >> ~/.bashrc
    

    The above command will append one export line to bash’s configuration file, which bash reads every time it starts. You may also do

    echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
    

    so that every time you type java on the command line it will use the new Java. Note the single quotes in the second command: we want $JAVA_HOME and $PATH to be expanded when bash starts, not at the moment the line is written to the file. A quick sanity check is shown right after this list.
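
For a quick sanity check that the new JRE is the one being picked up (the paths are the ones chosen above):

    source ~/.bashrc
    echo $JAVA_HOME                  # should print /usr/lib/jre
    $JAVA_HOME/bin/java -version     # should report something like 1.6.0_10
    which java                       # should now resolve to /usr/lib/jre/bin/java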

Now it’s time to download and set up Tomcat in almost exactly the same way:

  • Download the binary distribution of Tomcat from http://tomcat.apache.org/download-55.cgi. I would choose the .tar.gz file (also known as a tarball):
    cd ~/Desktop
    wget http://download.nextag.com/apache/tomcat/tomcat-5/v5.5.27/bin/apache-tomcat-5.5.27.tar.gz
    
  • Now decompress it. The way to decompress a tarball is almost always:
    tar -xzf apache-tomcat-5.5.27.tar.gz
    

    This will give you a new folder (named apache-tomcat-5.5.27 in my case) that contains the tomcat program.

  • Now move the new folder to a permanent location. As with the JRE above, any location would work. I chose /opt, which people typically use to store relatively self-contained programs:
    mv apache-tomcat-5.5.27 /opt/tomcat-5.5
    

    This command moves the new tomcat folder to /opt and renames it to tomcat-5.5.

  • Similar to setting up JAVA_HOME, we will also need to set up CATALINA_HOME (and CATALINA_BASE). So do:
    echo "export CATALINA_HOME=/opt/tomcat-5.5" >> ~/.bashrc
    echo "CATALINA_BASE=/opt/tomcat-5.5"  >> ~/.bashrc
    

That’s it. Close the terminal window and start a new one so that all the new settings get loaded. You should now be able to start Tomcat by running the familiar /opt/tomcat-5.5/bin/catalina.sh start.
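
To check that it actually comes up, something like this should do (port 8080 and catalina.out are the Tomcat defaults):

    /opt/tomcat-5.5/bin/catalina.sh start
    tail /opt/tomcat-5.5/logs/catalina.out    # look for a line like "INFO: Server startup in ... ms"
    curl -I http://localhost:8080/            # the default Tomcat page should answer with HTTP 200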

Read Full Post »

Daemonize a Python script

While developing a new feature in ChemMine, I ran into the problem of daemonizing a Python script, for the second time.

Typically an HTTP request can be handled pretty fast, so unless I need to call some C or C-based code that could potentially crash the server, I just process it inside the server process. For that C or C-based code, I call it using os.system(...). But when the external process takes a very long time, the Python-based application server will wait and wait, and eventually the client will time out the request.

That is why I need to daemonize these time-consuming processes, or in other words, run them in the “background” so that the application server can respond to the client as soon as possible. Daemonizing them looks easy at first, and there is a recipe for exactly that purpose. The catch is that it is really hard to debug. Sometimes when you test it in a terminal it returns instantly, as if it were successfully daemonized; but when you test it in the application server you find the server still waits for it to finish. So here is the most important thing to remember when you daemonize a process: close ALL file descriptors at the OS level, that is, call os.close(...) on every fd from 0 up to 1023.

That also makes it nearly impossible to debug, because standard output and standard error are closed as well and all error messages are discarded. So I leave standard error open until I have ironed out all the bugs.

Also, if you call further external programs from this daemonized process using something like os.system(...), make sure you redirect their standard output and standard error. These are no longer available in the daemonized process, so a process it spawns cannot inherit them and can die when it fails to write something.
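
For example, the command string I end up passing to os.system(...) from the daemonized process typically looks something like this (long_running_tool and the log path are just placeholders):

    # give the child its own stdin/stdout/stderr, since the daemon has none
    long_running_tool input.dat < /dev/null > /tmp/long_running_tool.log 2>&1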

Read Full Post »

I have been using my workstation as a media PC for a while, hooking my HDTV up to it to watch DVDs. However, the VGA output is far from perfect: the signal is not great, probably due to an imperfect connection, and the picture is not centered.

A few days ago I got a DVI-HDMI cable and plugged my MacBook into my TV with it. I was so impressed by the picture quality that I decided to go for DVI on my workstation right away.

So I got a cheap nVidia card with DVI output (a GeForce FX 5200). It did not take me long to find a bunch of posts online about outputting 720p from an nVidia card. However, although the TV worked and the picture was crisp and clear, there seemed to be some “overscan” issue involved, and all the edges got trimmed.

I found a similar complaint, and I thought this might be a dead end because it seemed like this was a feature, not a bug. However, I did a search on my TV model and was able to find a repository of modelines. Isn’t it great? BTW, the magic line for me is

 Option         "ModeValidation" "NoDFPNativeResolutionCheck" 

in the “Monitor” section of xorg.conf.

Read Full Post »

On Windows and UNIX

I was a Windows user, but got so damn tired of that poorly-made OS, which seemed to work but required SO MUCH babying. Yes, you can do this and you can do that, just follow the steps and click through dozens of unnecessary and stupidly-designed buttons, plus a few registry changes! I am a computer major, and I am surprised to see how a so-called user-friendly OS can have so many poorly-designed UI elements and so many confusing and counter-intuitive UI designs. And what is much worse, it needs constant care. I do not know how things are now, but back in the days of Windows XP SP1 I had to reinstall about every 3 months, and every time, after spending a few hours installing the system and the necessary drivers, I had to be very careful to patch it WITHOUT connecting to the internet, and then spend much more time reinstalling all the applications. This sometimes required a very specific ordering: you must install A and then install B, and never the reverse. Oh, one more thing: do not install too many applications, because it’s gonna slow down the system. Bullshit! I have thousands of programs on my Linux box, and it is still as fast as a minimal system. I have added more and more applications to my Mac, and no, it has not slowed down at all. You know what, you are also advised to shut down the Windows machine once in a while, otherwise it’s gonna slow down, too. Again, bullshit! My Linux box barely needs a restart, and I only restart my Mac after system updates (about 3 or 4 times in half a year).

After the (difficult) transition about 3 years ago (when the kernel had just gone to 2.6), I never had to baby my system: get Debian installed, run a single command to update whenever you remember to, and do your WORK. No reinstallation is necessary no matter how “old” the system is. When you really need a reinstall, or you want to clone the system to a new machine, you do not need to spend a whole day installing all the applications: simply back up the list of packages you have installed, give it to aptitude, and you are all set. For the Mac it is the same story: back up your Applications folder and your home folder. With application bundles, there is no such thing as “installing a program”.

So in terms of usability, I would say Windows is functional, but it requires the user (or a friend of the user, or a paid agent) to spend more time keeping it functional. Yes, you can run fast, but only if you apply this and that tweak, restart frequently, and do a reinstallation once in a while; and yes, you can copy your partition image to avoid the long reinstallation process, but you have to follow the right procedure and you lose all the updates made to the system since the image was taken, because the backup is utterly static and inflexible. Yes, you have shadow copy, but I bet 99.99% of users do not know how to make it back up their data on a home desktop, and I would vote for a simple, customized tar or rsync command instead.

I have to say Linux is hard for most users, but only if you intend to use it as a UNIX, which is meant to be flexible (which means “simple” in the UNIX world, because you do not need much hacking to make it suit your needs). However, if you are a “dumb” user, you can definitely use it like Windows, and then it is even easier than Windows! People have been saying that Linux is hard to learn because you have to type commands. No, you don’t. A friend of mine recently borrowed my laptop running Ubuntu, and she never asked me a single question, nor did she ever touch the terminal. As a “dumb” computer user, who only needs to browse, email, and type some documents, why the hell would you need the terminal? Administration? No, because Ubuntu does quite well with zero babying. Believe me, Windows needs much more care, and that is why some “more advanced” Windows users get so many calls for help from their friends.

You might say Linux has fewer applications. Who cares? There might be millions of Windows applications out there (I doubt it, though), but how many of them do you really need? OS X has far fewer applications, but all the OS X users I know are quite satisfied with only a few dozen. It is not about quantity. It is about quality and the coverage of your computing needs.

And when it comes to coverage of computing needs, you think Windows covers more? That might be the case, for SOME people. But what about the “dumb” users who make up 90% of computer users and whose needs are extremely simple? What about the server environment, which requires a very high level of robustness, flexibility, scalability, and stability? What about web developers (like me), who want a single command to set up a development environment that is almost identical to the production server environment? What about computer scientists, who like Lisp, LaTeX, MetaPost and so much other “geeky” stuff that is so damn hard to install on Windows? What about animation studios, which have been using Linux clusters to render the best animated movies ever made? What about NASA scientists, who need a super-stable, super-customizable and super-bug-free system to send spacecraft so far away that any human intervention is simply impossible? What about embedded system developers, who need a system that can be tailored to fit into a 16 MB flash card and power the hundreds of millions of smart devices found everywhere? What about the millions of people in Africa who only need basic software to get connected to the rest of the world and do not want to pay for a bloated, resource-hungry and “over-qualified” Windows?

There is still much work to do on Linux. It is far from perfect. But we are not behind any Windows system ever made, in any sense. We are behind OS X in many aspects of usability, behind Solaris in some super-cool features used in server environments, and behind AIX in handling systems with huge numbers of processors. But I am very confident Linux will do very well, considering how fast it has been growing. Remember, we are standing on the shoulders of a giant: UNIX.

Read Full Post »

I use the Apache server hosted by my department at the university. Unfortunately, the server is set to use ASCII as the default encoding, so even if you specify a charset in the meta tag, it is not going to be respected by the browser.

But I have to use Chinese characters now and then. Previously, I translated the Chinese characters to their Unicode code points and included them in the document using the &# hack. But that is only practical for a page with a few such characters.

There is a better way to do this: encode the charset information in the filename, and Apache will output the proper encoding header based on it. This is possible thanks to the AddCharset lines in the config file, such as the one below:

conf/httpd.conf:AddCharset UTF-8       .utf8

So if you have a file whose name ends in .html.utf8, Apache will serve the page as UTF-8 and will put the proper charset directive in the Content-Type header accordingly.
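
So in practice I just rename the page and check the header Apache sends back; something like this (the file name and URL are only examples):

    mv notes.html notes.html.utf8
    curl -I http://server.example.edu/~me/notes.html.utf8
    # the response should now include: Content-Type: text/html; charset=UTF-8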

Read Full Post »

Older Posts »