Linux Blog

Run Levels in a Nutshell

Filed under: Quick Linux Tutorials — TheLinuxBlog.com at 9:03 am on Wednesday, March 11, 2009

Run levels in Linux are a great thing. A run level is, by definition, a configuration for a group of processes. The available run levels and the default run level are specified in /etc/inittab. Most Linux systems these days, with the exception of a few, boot into run level 5, which generally brings up a graphical login manager such as KDM or GDM. The rest boot into run level 3; most servers use this run level, which is multi-user with networking but no X, and it is many users' preference.

To set the run level your system boots into by default, edit /etc/inittab and change the line similar to:

id:5:initdefault:

This sets the default to run level 5; if you wanted to boot to the command line instead, you would change the 5 to a 3, and vice versa.
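For a quick look at the current default without opening an editor, you can grep for that line (a small sketch; the exact layout of /etc/inittab varies a little between distributions):

# Print the default run level line from the init configuration
grep :initdefault: /etc/inittab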

If you're not ready to make the jump yet but would like to check it out, you can (as root) use the telinit command to tell init to change run levels. If you are in run level 5, try the following (be prepared to lose everything in X, as it will kill everything for you):

telinit 3

If you are doing maintenance, you may want to switch to run level 1, which is single-user mode. Level 2 on Fedora is the same as 3 except that it doesn't have NFS support.
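If you want to confirm which run level you are in before and after switching, the runlevel command will tell you (a quick sketch, assuming a SysV-style init):

# Prints the previous and current run levels; "N 5" means no previous level, currently 5
runlevel
# As root, drop into single-user mode for maintenance
telinit 1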

Level 0 is halt and run level 6 is reboot, which are the best ones to accidentally set as a default run level (trust me on this one). For more information on the different run levels, check out the man pages.
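On most SysV-style systems, the relevant pages are:

man init
man inittab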

Linux Performance Boosting – Graphics

Filed under: General Linux — TheLinuxBlog.com at 12:24 pm on Thursday, July 31, 2008

Is your Linux box chugging along? Does it take a while for web pages to load or to boot up? Does your screen lag when you scroll a web page?

Well my friends, you've come to the right place. Your Linux box's poor performance could be a graphics issue. A lot of distributions do not install the correct graphics drivers by default. Yes, your graphical user interface might work, but without the correct Linux graphics drivers you will not get the performance that you should be getting.

Linux has a default video driver called VESA. Most video cards work with this driver but perform poorly, because VESA uses the CPU to do graphics processing and does not rely on the video card for 3D acceleration. If you have a 3D-accelerated video card (most ATI / NVIDIA cards; I will not go into detail here), then you might be able to offload graphics processing from your CPU onto your GPU.
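A quick way to check whether you currently have hardware acceleration is glxinfo, from the mesa-utils package on many distributions (a sketch; package and string names vary by distribution and driver):

# "direct rendering: Yes" suggests the GPU is doing the work
glxinfo | grep -i "direct rendering"
# The renderer string names what is actually drawing; anything mentioning
# software or indirect rendering hints that the CPU is doing it
glxinfo | grep -i "renderer string"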

Here is how to test your frames per second while running the standard VESA driver:

[owen@LinuxBlog ~]$ glxgears
2623 frames in 5.0 seconds = 524.096 FPS
1677 frames in 5.0 seconds = 334.784 FPS
1948 frames in 5.0 seconds = 389.488 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 19707 requests (19415 known processed) with 0 events remaining.

Now, the performance of this machine is quite good, so the resulting frames per second (FPS) are not too shabby, but they're not the best either. After installing the correct video card driver for this Linux box, let's take a look at what kind of performance I get:

[owen@LinuxBlog ~]$ glxgears
6179 frames in 5.0 seconds = 1235.749 FPS
6558 frames in 5.0 seconds = 1311.449 FPS
6489 frames in 5.0 seconds = 1295.583 FPS
XIO: fatal IO error 22 (Invalid argument) on X server ":0.0"
after 39 requests (39 known processed) with 0 events remaining.

As you can see from the results, the graphics driver makes a huge difference in the number of FPS I can achieve, but this is not the only benefit of using the correct 3D-accelerated driver. When the correct driver is installed, the graphics card does most of the work, freeing up the CPU to do other tasks. It's a win-win situation, so get your graphics card set up properly today!

Fetching Online Data From Command Line

Filed under: Shell Script Sundays — TheLinuxBlog.com at 6:12 pm on Sunday, December 2, 2007

Shell scripts can come in handy for processing or re-formatting data that is available from the web. There are lots of tools available to automate fetching pages instead of downloading each page individually.

The first two programs I’m demonstrating for fetching are links and lynx. They are both shell browsers, meaning that they need no graphical user interface to operate.

Curl is a program used to transfer data to or from a server. It supports many protocols, but for the purpose of this article I will only be showing HTTP.

The last method (shown in other blog posts) is wget. wget also fetches files over many protocols. The difference between curl and wget is that curl by default dumps the data to stdout, whereas wget by default writes the file to the remote filename.
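Each can also imitate the other's default behavior with a flag (a quick sketch; note that curl's -O takes the filename from the URL, so the URL must end in one):

# Make curl save to the remote filename, like wget does by default
curl -O http://www.thelinuxblog.com/index.html
# Make wget write to stdout, like curl does by default
wget -O - http://www.thelinuxblog.com > wget-stdout.html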

Essentially, the following all do the exact same thing:

owen@linux-blog-:~$ lynx http://www.thelinuxblog.com -source > lynx-source.html
owen@linux-blog-:~$ links http://www.thelinuxblog.com -source > links-source.html
owen@linux-blog-:~$ curl http://www.thelinuxblog.com > curl.html

Apart from the shell browser interface, links and lynx also have some differences that may not be visible to the end user. Both re-format the received code into a format that they understand better; the option for doing this is -dump. They each format it differently, so I would recommend using whichever one is easier for you to parse. Take the following:

owen@linux-blog-:~$ lynx -dump http://www.thelinuxblog.com > lynx-dump.html
owen@linux-blog-:~$ links -dump http://www.thelinuxblog.com > links-dump.html
owen@linux-blog-:~$ md5sum links-dump.html
8685d0beeb68c3b25fba20ca4209645e links-dump.html
owen@linux-blog-:~$ md5sum lynx-dump.html
beb4f9042a236c6b773a1cd8027fe252 lynx-dump.html

The differing md5sums indicate that the dumped output is not the same.
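If you are curious how exactly they differ, a quick diff will show you (sketch):

# Show the first few differences between the two renderings
diff lynx-dump.html links-dump.html | head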

wget does the same thing (as curl, links -source and lynx -source) but will create the local file with the remote filename, like so:

owen@linux-blog-:~$ wget http://www.thelinuxblog.com
--17:51:21-- http://www.thelinuxblog.com/
=> `index.html'
Resolving www.thelinuxblog.com... 72.9.151.51
Connecting to www.thelinuxblog.com|72.9.151.51|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html] [ <=> ] 41,045 162.48K/s

17:51:22 (162.33 KB/s) - `index.html' saved [41045]

owen@linux-blog-:~$ ls
index.html

Here is the result of running md5sum on all of the files in the directory:

owen@linux-blog-:~$ for i in $(ls); do md5sum $i; done;
a791a9baff48dfda6eb85e0e6200f80f curl.html
a791a9baff48dfda6eb85e0e6200f80f index.html
8685d0beeb68c3b25fba20ca4209645e links-dump.html
a791a9baff48dfda6eb85e0e6200f80f links-source.html
beb4f9042a236c6b773a1cd8027fe252 lynx-dump.html
a791a9baff48dfda6eb85e0e6200f80f lynx-source.html

Note: index.html is wget's output.
Wherever the sums match, the output is the same.

What do I like to use?
Although all of the methods (excluding -dump) produce the same results, I personally like to use curl because I am familiar with the syntax. It handles variables, cookies, encryption and compression extremely well, and the user agent is easy to change. The last winning point for me is that it has a PHP extension, which is nice for avoiding system calls to the other methods.
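For example, here are a few of those features in a single invocation; the user agent string and cookie value below are just placeholders:

# Fetch a page with a custom user agent, a cookie, and compressed transfer
curl -A "Mozilla/5.0 (compatible; MyScript)" \
     -b "session=abc123" \
     --compressed \
     http://www.thelinuxblog.com > curl-custom.html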