Linux Blog

Automatically reconnecting to a host

Filed under: Shell Script Sundays — TheLinuxBlog.com at 9:15 pm on Sunday, August 17, 2008

If you follow me on Twitter: http://www.twitter.com/LinuxBlog then you may know that I regularly update a bunch of Linux PCs and servers. Since I’m sort of lazy and don’t like doing anything manually that I don’t have to, I thought I’d post the one-liner I use to automatically reconnect to a host.

while ! ping -W 1 -c 1 [hostname or IP] >/dev/null 2>&1; do true; done && sleep 15; ssh [user]@[hostname or IP]

This one-liner uses the ping command to ping the host or IP once (-c 1) with a timeout of one second (-W 1). Once the ping loop is broken (ping returns success) I let it sleep for 15 seconds to give SSH time to come up. Then the inevitable happens: I use SSH to reconnect to the host.
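
If you find yourself doing this for more than one machine, the same idea can be wrapped into a tiny script that takes the user and host as arguments. This is just a minimal sketch of the one-liner above; the script name and argument handling are my own additions:

#!/bin/bash
# reconnect.sh - wait for a host to answer ping, then SSH in
# Usage: ./reconnect.sh user host
USER_NAME="$1"
HOST="$2"

# Loop until a single ping (-c 1) with a one second timeout (-W 1) succeeds
while ! ping -W 1 -c 1 "$HOST" >/dev/null 2>&1; do
true
done

# Give sshd a little time to come up, then connect
sleep 15
ssh "$USER_NAME@$HOST"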

There you have it, a quick way to reconnect to a host without typing the command or pressing the up arrow every time. Enjoy!

Automated scanning with the shell

Filed under: Shell Script Sundays — TheLinuxBlog.com at 9:25 am on Sunday, July 27, 2008

I recently needed to scan a lot of images on my desktop PC. Unfortunately I am not the owner of a scanner with an automatic document feeder, and even if I were it wouldn’t have helped this time because the documents I needed to scan were not feedable. XSANE is a great way to scan documents visually in Linux. It’s not the easiest to use, but it has plenty of options. Part of the SANE package is scanimage, which can be used from the shell.

The first thing I did was scan a few test images with scanimage. I quickly found out that scanimage outputs in pnm format, and at a high resolution if the correct options are used. Once I found the right options for my scanner (scanimage --resolution 400 > file.pnm) I wrote a quick shell script to scan up to 1000 times, or until I don’t give the script any input. To do this, I used a combination of snippets that can be found in this blog column.

Here is a direct link to the script, and the shell script source is below:

#!/bin/bash
for i in `seq 1 1000`; do

#get input line
read inputline;

#only continue if something was typed
if [ -n "$inputline" ]; then

#scan the page
echo "Scanning Pg$i";
scanimage --resolution 400 > "Pg$i.pnm";
echo "Next";
else
exit
fi

done;

To use it, all I do is execute the script, and I get to scan up to 1000 documents, provided I type something after it prompts “Next” and then hit enter. Once I was done scanning, I just hit enter on its own to stop the script and then moved on to manipulating the images with the shell.
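
As an example of that manipulation step, here is roughly how the pnm files could be batch-converted to JPEG. This sketch assumes ImageMagick’s convert command is installed; any pnm-capable converter would do:

#!/bin/bash
# Convert every scanned page from pnm to jpg
for f in Pg*.pnm; do
# strip the .pnm extension and append .jpg
convert "$f" "${f%.pnm}.jpg"
done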

Hope this scanning script is useful. If it is, drop me a comment; if you have any suggestions, or it was not at all helpful, still drop me a comment.

Timing your reboots with Twitter support!

Filed under: Shell Script Sundays — TheLinuxBlog.com at 12:01 am on Sunday, July 20, 2008

Firstly, I’d like to start off by saying that all of the concepts in this post have been covered in other posts, so I will not go into great detail on the specifics of this script. If you need more information about any of the commands, check the man page section at the bottom of this page; from the man pages you will find examples from other posts covering similar topics.

The purpose of this script for me was to time my reboots. It could be modified to log the time it takes to replace hardware or add memory, but that’s another post. Since we are logging reboot times, we are (hopefully) dealing with small numbers and therefore don’t have to deal with formatting the time (at least not for now).

The script should work on any system that has bash. There is nothing too special about it. It uses the reboot command, so the user it is launched as will have to have access to that command. Put the script in the user’s bin directory and chmod it so it is executable; the user must have write access to that directory. They must also have write access to their home directory, but this should not be a problem for most. Line 8 of the script needs to be changed to the user you plan on running this as.

After that, test that the timereboot command works by typing timereboot:

[owen@linuxblog ~]$ timereboot
Usage: /home/linuxblog/bin/timereboot {time|ttime|back}

If you see that usage message, that’s a pretty good indication that the script is working. Next, I suggest commenting out the reboot command on line #25 if this is a mission-critical machine and you don’t want to reboot multiple times to get it working. If not, go ahead and try the time command. Once your system is back up and you’re logged in, type the “timereboot back” command; it will then tell you the time taken since your system went down.

Once you have verified that the timing works, you can go ahead and add it to your .bashrc to automatically perform the action once you’re logged in. All you need to do is add a line like this:

/home/linuxblog/bin/timereboot back

Now, if you want you can try again and see the results automatically.

“Thats great, but how do I post it to twitter?”

Well, there is one last thing that you have to do to get your reboot time posted to Twitter. Edit line 55 and change it to your Twitter username and password. Do the same thing as before to reboot, but use the ttime parameter to log to Twitter.

This script does not post to Twitter that you are rebooting (although it could), nor does it format the time, but it works and should give you a starting point if you are interested in doing this. It doesn’t really serve a purpose other than to inform people how quickly or how slowly you reboot. Also, please note that this is not a startup time: it times from when you issue the command until you issue the back command, or log in using the .bashrc method.

If you have any questions about this script or any other ideas, let me know and I’ll be happy to help or implement them for fun.

And here is the Twitter reboot script
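
In case the download link does not survive the archive, here is a minimal sketch of how such a script could look. This is my own reconstruction, not the original, so the line numbers mentioned above won’t match; the Twitter call follows the same curl approach used in the command-line tweeting post below, and the file locations, variable names and the decision to tweet from the back action are assumptions:

#!/bin/bash
# timereboot - record when a reboot starts and report how long it took
# Usage: timereboot {time|ttime|back}

STAMP="$HOME/.timereboot"          # where the start time is stored
TWITTER_USER="user"                # change to your Twitter credentials
TWITTER_PASS="password"

case "$1" in
time)
date +%s > "$STAMP"
reboot                             # comment this out while testing
;;
ttime)
date +%s > "$STAMP"
touch "$STAMP.tweet"               # flag that "back" should tweet the result
reboot                             # comment this out while testing
;;
back)
START=$(cat "$STAMP")
NOW=$(date +%s)
ELAPSED=$(($NOW - $START))
echo "Reboot took $ELAPSED seconds"
if [ -f "$STAMP.tweet" ]; then
curl -u $TWITTER_USER:$TWITTER_PASS -s -F status="My machine rebooted in $ELAPSED seconds" http://twitter.com/statuses/update.xml > /dev/null
rm "$STAMP.tweet"
fi
;;
*)
echo "Usage: $0 {time|ttime|back}"
exit 1
;;
esac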

Adding a service in Fedora

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:08 pm on Sunday, July 6, 2008

This week on Shell Script Sundays I’ll show you how to add a service to Fedora. This is very useful if you don’t happen to use yum for every service you want to run, and xinetd doesn’t really work for you.

Firstly, there are three main parts to a Fedora service script: start, stop and restart. They are pretty much self explanatory, and you don’t have to worry much about the restart action since all it does is stop and then start the service.

Without further ado here is the script:

#!/bin/bash
#
# Fedora-Service Update notification daemon
#
# Author:       TheLinuxBlog.com
#
# chkconfig:    345 50 50
#
# description:  This is a test Fedora Service \
#               Second line of the fedora service template.
# processname:  FedoraTemplate
#
RETVAL=0;
 
start() {
echo "Starting Fedora-Service"
}
 
stop() {
echo "Stopping Fedora-Service"
}
 
restart() {
stop
start
}
 
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
*)
echo $"Usage: $0 {start|stop|restart}"
exit 1
esac
 
exit $RETVAL

Now that you have a template for the script, you will want to modify it for your service. You need to keep the header at the top; this is how Fedora knows about your service. On the chkconfig line, the first field lists the runlevels the service should run in, and the other two numbers are the start and stop priorities, which control where the script falls in the startup and shutdown order. These can be adjusted depending on when you want the service to start up. Once you are done modifying the script, put it in /etc/init.d/
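
For example, the start and stop functions might end up looking something like this once they launch a real program. The daemon path here is a made-up placeholder; the daemon and killproc helpers are provided by /etc/init.d/functions on Fedora:

# Example start/stop bodies for a hypothetical daemon at /usr/local/bin/mydaemon
. /etc/init.d/functions

start() {
echo -n "Starting mydaemon: "
daemon /usr/local/bin/mydaemon
RETVAL=$?
echo
}

stop() {
echo -n "Stopping mydaemon: "
killproc mydaemon
RETVAL=$?
echo
}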

To make sure it works you can call it with service using the following actions:

service [script name] start
service [script name] stop
service [script name] restart

If all of the actions work, you are ready to add the service to the system. If you use the setup command as root it seems to do this step for you, but if you just want to add the service quickly without bothering to scramble through configuration menus you can do the following:

chkconfig --add [script name]

If you want the service to start automatically at boot up you can use ntsysv. For more information read my post on Managing Services on Fedora

Tweeting from the command line

Filed under: Shell Script Sundays — TheLinuxBlog.com at 12:25 pm on Sunday, June 29, 2008

This is a subject that has been covered time and time again but I don’t think that it will hurt one more time. Twitter is a very popular “Microblogging” site where you can constantly change your status to let those who “follow” you know what you are doing. Since I just signed up for twitter for The Linux Blog I figured I’d write this post on how I update my twitter feed. While I’m at it I might as well invite you over to my feed URL: http://twitter.com/linuxblog

So here is the script:

#!/bin/bash
echo "Enter Tweet: ";
read inputline; TWEET="$inputline";
curl -u user:password -s -F status="$TWEET" http://twitter.com/statuses/update.xml http://twitter.com/account/end_session

This is a very basic Twitter script: it does no error checking and probably doesn’t escape characters properly. Nonetheless, it works. The part that gets input from the shell is the following line:

read inputline; TWEET="$inputline";

If you’d like more information on how this works read this article: Shell Script to get user input

Curl is used to send the data to Twitter. To view curl tutorials and how-tos, visit the Curl Man Page, which has a wealth of information at the bottom.

Until next time, happy tweeting!

Parse ifconfig data with shell scripts

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:25 pm on Sunday, June 8, 2008

This week in TheLinuxBlog.com’s Shell Script Sundays article I’m going to show you how you can use basic UNIX commands to parse networking data. As always there are a number of different methods of achieving this, and I am in no way saying that this is absolutely the way you must do it, or the best way. It’s just an example of how you can use shell scripts to your advantage.

Firstly, most people know that Linux uses the ifconfig command to get information about network interfaces. If you issue ifconfig followed by an interface name, you get information about just that interface, as follows:

# /sbin/ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0E:35:7F:E2:98
          inet addr:192.168.2.13  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:35ff:fe7f:e298/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1146 errors:0 dropped:39 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23748601 (22.6 MiB)  TX bytes:507899 (495.9 KiB)
          Interrupt:11 Base address:0x4000 Memory:fceff000-fcefffff

This information is not in the best format to parse. To make it easier, we are going to search for the whitespace at the beginning of each continuation line and replace it with a comma, by doing this:

# /sbin/ifconfig eth1 | sed 's/          /,/'
eth1      Link encap:Ethernet  HWaddr 00:0E:35:7F:E2:98
,inet addr:192.168.2.13  Bcast:192.168.2.255  Mask:255.255.255.0
,inet6 addr: fe80::20e:35ff:fe7f:e298/64 Scope:Link
,UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
,RX packets:1344 errors:0 dropped:39 overruns:0 frame:0
,TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
,collisions:0 txqueuelen:1000
,RX bytes:23809630 (22.7 MiB)  TX bytes:507899 (495.9 KiB)
,Interrupt:11 Base address:0x4000 Memory:fceff000-fcefffff

That gives us a comma at the start of every continuation line. To squash the output onto a single line and be able to grab fields from it, the tr command can be used to replace all whitespace (including the newlines) with pipes.

#/sbin/ifconfig eth1 | sed 's/          /,/' | tr [:space:] \|
eth1||||||Link|encap:Ethernet||HWaddr|00:0E:35:7F:E2:98|||,inet|addr:192.168.2.13||Bcast:192.168.2.255||Mask:255.255.255.0|,inet6|addr:|fe80::20e:35ff:fe7f:e298/64|Scope:Link|,UP|BROADCAST|RUNNING|MULTICAST||MTU:1500||Metric:1|,RX|packets:1765|errors:0|dropped:39|overruns:0|frame:0|,TX|packets:1|errors:0|dropped:0|overruns:0|carrier:0|,collisions:0|txqueuelen:1000||,RX|bytes:23941275|(22.8|MiB)||TX|bytes:507899|(495.9|KiB)|,Interrupt:11|Base|address:0x4000|Memory:fceff000-fcefffff|||

Now that the fields are all delimited properly, let’s use the cut command to grab a single line from this. Since I am interested in the RX and TX bytes, I’m going to grab the eighth comma-delimited field (which was line 8 of the original output) by using the cut command as follows:

#/sbin/ifconfig eth1 | sed 's/          /,/' | tr [:space:] \| | cut -d , -f 8
RX|bytes:24014818|(22.9|MiB)||TX|bytes:507899|(495.9|KiB)|

That gave us a nice line of output which is easy to parse even further using the cut command. You will notice the fields are delimited by a pipe (the | character) and are not always consistent, since we replaced all spaces with a pipe. Take a look at the first two fields, RX and bytes:24014818. This means that to get the RX bytes in bytes we would need to cut yet again. Since I’m not too bothered about the exact byte count, and the human-readable MiB figure sits in fields 3 and 4, I will concentrate on those.

#/sbin/ifconfig eth1 | sed 's/          /,/' | tr [:space:] \| | cut -d , -f 8 | cut -d \| -f 3-4
(23.0|MiB)

This is a nice RX MiB output, yet it has one last problem: the pipe between the number and the unit. Sed can be used to replace this, and any other characters if you wish. Just issue a sed find and replace like this:

#/sbin/ifconfig eth1 | sed 's/          /,/' | tr [:space:] \| | cut -d , -f 8 | cut -d \| -f 3-4 | sed 's/|/ /'
(23.0 MiB)
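
Putting it all together, the whole pipeline can live in a small script that prints the RX figure for whichever interface you pass it. This is just a sketch built from the commands above; the script name and argument handling are my own:

#!/bin/bash
# rxbytes.sh - print the human-readable RX total for an interface
# Usage: ./rxbytes.sh eth1
IFACE="$1"

# Same pipeline as above; '[:space:]' is quoted here so the shell does not try to expand it
/sbin/ifconfig "$IFACE" | sed 's/          /,/' | tr '[:space:]' '|' | cut -d , -f 8 | cut -d \| -f 3-4 | sed 's/|/ /'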

That looks good for now. If you would like more information on how to parse data, whether regarding this post or anything else, you can always leave me a comment and I’ll try my best to help, especially if we can post the results on TheLinuxBlog in another Shell Script Sundays article. Thanks for reading The Linux Blog and come back soon!

Tar Archive Mischief.

Filed under: Shell Script Sundays — TheLinuxBlog.com at 1:22 am on Sunday, June 1, 2008

I ran into a problem the other day when I downloaded a particular tar.gz archive (Simple Machine Forums, to be specific). The problem was that, however good SMF might be, the developers did not put the files in a folder before they tar.gz’d it. This is not the only time I have run into this problem; a lot of developers actually do it. Over time it’s become a habit to assume that the contents are in a folder.

Here’s a solution to delete all files that were extracted from an archive:

tar xvzf [filename] > [filename]-filelist.txt
cat [filename]-filelist.txt | while read i; do rm "$i"; done;

If you want to, you can do a dry run of the script by putting an echo in front of the rm statement and looking at the output. Any files you already had with the same names, e.g. index.php, will most likely have been overwritten by the extract in the first place, so it doesn’t hurt to delete them.
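
For example, the dry run would look something like this; it just prints the rm commands instead of running them:

cat [filename]-filelist.txt | while read i; do echo rm "$i"; done;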

Once you have deleted all of the files from the archive you can simply create a directory and use the following to extract to it:

tar xvzf [filename] -C [yourdir]

Using Bash Scripts in Web Applications

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:22 pm on Sunday, May 25, 2008

Using bash scripts for web applications is not exactly rocket science, nor is it necessarily the best idea in the world, but it can be handy if you already have a bash script and want to use its functionality on the web. There are a couple of ways to use bash scripts on the web.

The first that I know of is as a CGI. All that you have to do for this one is create a cgi-bin, or allow files with the extension .cgi to be executed; this is done with Apache in your httpd.conf file.
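
Here is a rough idea of what a bash CGI can look like. The Content-type header and the blank line after it are required before any output; the script name and the example httpd.conf directives in the comments are assumptions based on a stock Apache setup:

#!/bin/bash
# hello.cgi - a minimal bash CGI
# In httpd.conf something like the following is typically needed for the directory:
#   Options +ExecCGI
#   AddHandler cgi-script .cgi

# CGI output must start with a header and a blank line
echo "Content-type: text/html"
echo ""

echo "<html><body>"
echo "<p>Hello from bash, the date is $(date)</p>"
echo "</body></html>"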

The second is to use another scripting language to call the script. The easiest way for me is to use PHP. A system call to the script file can be made using the exec() function. Just make sure that the file has execute rights for the user that your web server runs as. Here is an example of using the exec() function in PHP:

$output = exec('/usr/local/bin/yourscript.sh');

The third method is to use Server Side Includes (SSI) to include the script. I personally am not familiar with setting up SSIs, but this is how you execute a command from within an SSI:

<!--#exec cmd="/usr/bin/date" -->

Whichever method you choose, precautions have to be taken. Make sure that all inputs are sanitized so that a user cannot escape the command, pipe output to another file or manipulate the system in another way. In PHP it is easy to do this, but I cannot speak for CGIs or SSIs. I hope this gives some insight into how you can run bash scripts in your web application. If you have any other methods, such as using mod_python or maybe Tcl, please post them as a comment!
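
On the bash side, one crude way to sanitize a value before using it in a command is to whitelist the characters you allow. This is just a sketch of the idea, not a complete defence:

#!/bin/bash
# Keep only letters, digits, dots, dashes and underscores from the first argument
CLEAN=$(echo "$1" | tr -cd 'A-Za-z0-9._-')

# Now it is reasonably safe to pass $CLEAN to another command
ls -l "$CLEAN"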

Bash Scripting Techniques

Filed under: Shell Script Sundays — TheLinuxBlog.com at 10:20 pm on Sunday, May 18, 2008

Here are some techniques that you can use in your bash scripts for finding and searching through files. Combined with other shell scripting techniques these can be very powerful.

Find all JPEG files in the current directory (and below) and print them:

find . -iname "*.jpg"

Find all files that you have access to read with a matching pattern:

find / -iname "pattern"

Normally grep matches text case sensitively. Here’s how to do a case insensitive search with grep:

cat [filename] | grep -i [match]

Finding and replacing text is easily done in bash with sed. This find and replace puts the contents into a new file:

 cat [filename] | sed 's/FIND/REPLACE/' > [new filename]

Finding the line number that a particular line of text is on is sometimes useful. Here is how to do it:

 cat [filename] | grep -n [match]

Looping over a file in bash and echoing the output is sometimes useful for the processing of text files. Here’s how to do it:

cat [filename] | while read i; do echo $i; done

That’s about all the bash scripting techniques I can currently think of for finding things in files. I know there are a ton more that I use, but it’s hard to write them all down at once. As I come up with them or solve a problem I’ll add them here. If you have any of your own, please leave them in the comments.

RSS Feeds

Filed under: Shell Script Sundays — Kaleb at 11:43 am on Sunday, April 20, 2008

The other day I was playing around with AwesomeWM and I wanted the newest article from digg.com/linux_unix to be displayed in the statusbar. I thought to myself:

“I roughly know how RSS works, so I should be able to do this.”

It turns out it was extremely easy to do.

First, how does RSS work? It’s easy: just an XML file that gets downloaded with a list of the articles on the site. Well, that’s pretty simple, so I wrote a little script that will do all the things I need.

First I needed to download the list:

wget -c http://digg.com/rss/indexlinux_unix.xml

Done with that. Now, for what I wanted and to make it a little cleaner, I moved this file:

mv indexlinux_unix.xml ~/.news

This way it was in a file that I can easily access.

After that it was just some simple editing of the file using sed. If you don’t know much about sed, I suggest you read up on it; it is an extremely powerful tool for quick editing and scripting. The editing of the file was actually quite simple:

cat ~/.news | grep "<title>" | sed -e 's/<[/]title>//' | sed -e 's/<title>//' | sed -e '2,2 !d'

Now, no worries, I will explain this; it’s actually quite simple.

I will assume you know what cat ~/.news does, but if you don’t, it prints the contents of the file from beginning to end.

| grep "<title>" is a very important part of the command. As I looked at the XML file I realized that I would get a simple list of all the articles if I grepped for the title. However, that’s not all.

It was a very messy output, with <title> at the beginning and </title> at the end. Nobody wants to look at that; what I wanted was the text in between. | sed -e 's/<[/]title>//' will get rid of the </title> in the line. I am almost certain that | sed -e 's/<\/title>//' would have done the same thing, but you can test that if you want. It needs to be done like this because “/” is the delimiter of the sed expression, so a literal slash has to be escaped or put in brackets.

The next part, | sed -e 's/<title>//', should be self explanatory. Basically it just gets rid of the <title> in the line. So now, using the first three pipes, you will get a nice pretty list of all the articles.

This is not quite what we wanted though; we wanted the newest article. That’s why we use | sed -e '2,2 !d'. This command will cut out everything except the second line in the list. “Hmm, but why the second line, Kaleb?” Well, because while creating this script I found that the first <title> line was the one that told me where I was getting this information from, i.e. http://digg.com/linux_unix. I don’t want that, so I went with the second line for the first article. Easy, right?

Now, as I mentioned at the beginning of this article, I wanted to make this give me a clickable link for the awesome statusbar. I will go over awesome piping later this week, but basically the only information you will need is to go through the XML file for your RSS feed, find out which tags the link for your article sits between, use the above command to show you that link instead of the title, and then have Firefox (or whatever browser you use) open that link, as the sketch below shows. It was a very simple thing to do.
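
For instance, if the feed wraps each article URL in <link> tags (which the Digg feed did at the time), a command along these lines should pull out the link for the newest article and open it; the exact tag names and the line to keep may need adjusting for your feed:

cat ~/.news | grep "<link>" | sed -e 's/<[/]link>//' | sed -e 's/<link>//' | sed -e '2,2 !d' | xargs firefox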

Kaleb Porter

porterboy55@yahoo.com

http://kpstuff.servebeer.com (website currently down)

Suspend Scripts for the Toshiba Tecra M2

Filed under: Quick Linux Tutorials,Shell Script Sundays — TheLinuxBlog.com at 12:15 am on Sunday, March 30, 2008

As you may know if you are a regular reader, I own a Toshiba Tecra M2. One of the things that annoyed me was that I had to turn the brightness up every time my computer came out of standby mode. The fix is to have a script adjust the brightness automatically whenever the machine comes out of standby.

The script is intended to be run under cron. I have mine set up to suspend after 5 minutes of the lid being closed.

# Only act if the lid is closed
if [ $(cat /proc/acpi/button/lid/LID/state | sed 's/state:      //') == "closed" ]; then
# Read the current brightness level
VAR=$(cat /proc/acpi/toshiba/lcd | sed 's/brightness:              //' | grep -v levels);
# Suspend to RAM; the script continues from here when the laptop resumes
sudo su -c "echo mem > /sys/power/state";
if [ $VAR -eq 1 ]; then
ACTION=ADD;
elif [ $VAR -eq 7 ]; then
ACTION=SUB;
else
ACTION=ADD;
fi;
if [ $ACTION == "ADD" ]; then
VAR=$(($VAR + 1));
else
VAR=$(($VAR - 1));
fi;
# Write the adjusted brightness back so the panel is usable after resume
sudo su -c "echo brightness:$(echo $VAR) > /proc/acpi/toshiba/lcd";
fi;

I run this with the following cron entry:

*/5 * * * * sh hibernate.sh

The script first checks the current brightness and picks the arithmetic operation accordingly, so that when the laptop is opened the brightness has been adjusted. Basically, if the brightness is 7 it subtracts one; if it is 1, or any other value, it adds one. This is currently working out quite well for me. I don’t know how useful this is to anybody else, unless you happen to have a Toshiba that is doing the same thing, but it should give you a good overall idea of how to perform basic arithmetic operations in bash.
