Linux Blog

tweeting from the command line

Filed under: Shell Script Sundays — TheLinuxBlog.com at 12:25 pm on Sunday, June 29, 2008

This is a subject that has been covered time and time again, but I don’t think one more time will hurt. Twitter is a very popular “microblogging” site where you can constantly change your status to let those who “follow” you know what you are doing. Since I just signed up for Twitter for The Linux Blog, I figured I’d write this post on how I update my twitter feed. While I’m at it I might as well invite you over to my feed URL: http://twitter.com/linuxblog

So here is the script:

#!/bin/bash
echo "Enter Tweet: "
read -r TWEET
# Post the status update, then hit the logout URL to end the session
curl -u user:password -s -F status="$TWEET" http://twitter.com/statuses/update.xml http://twitter.com/account/end_session

This is a very basic twitter script: it does no error checking and probably doesn’t escape characters properly. Nonetheless, it works. The part that gets input from the shell is the following line:

read -r TWEET

If you’d like more information on how this works, read this article: Shell Script to get user input

curl is used to send the data to Twitter. For curl tutorials and how-tos, visit the curl man page, which has a wealth of information at the bottom.
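If you are worried about escaping, curl’s --data-urlencode option URL-encodes a value before sending it. Here is a minimal sketch of the update line using it (same placeholder user:password credentials as above):

curl -u user:password -s --data-urlencode "status=$TWEET" http://twitter.com/statuses/update.xml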

Until next time, happy tweeting!

Parse ifconfig data with shell scripts

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:25 pm on Sunday, June 8, 2008

This week in TheLinuxBlog.com’s Shell Script Sundays article I’m going to show you how you can use basic UNIX commands to parse networking data. As always there are a number of different methods of achieving this, and I am in no way saying that this is absolutely the way you must do it, or the best way. It’s just an example of how you can use shell scripts to your advantage.

Firstly, most know that Linux uses the ifconfig command to get information about networking interfaces. If you issue ifconfig followed by an interface name, you get information about just that interface, as follows:

# /sbin/ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0E:35:7F:E2:98
          inet addr:192.168.2.13  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:35ff:fe7f:e298/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1146 errors:0 dropped:39 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23748601 (22.6 MiB)  TX bytes:507899 (495.9 KiB)
          Interrupt:11 Base address:0x4000 Memory:fceff000-fcefffff

This information is not in the best format to parse. To solve this problem we are going to search for the run of whitespace at the beginning of each continuation line and replace it with a comma, like this:

# /sbin/ifconfig eth1 | sed 's/          /,/'
eth1      Link encap:Ethernet  HWaddr 00:0E:35:7F:E2:98
,inet addr:192.168.2.13  Bcast:192.168.2.255  Mask:255.255.255.0
,inet6 addr: fe80::20e:35ff:fe7f:e298/64 Scope:Link
,UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
,RX packets:1344 errors:0 dropped:39 overruns:0 frame:0
,TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
,collisions:0 txqueuelen:1000
,RX bytes:23809630 (22.7 MiB)  TX bytes:507899 (495.9 KiB)
,Interrupt:11 Base address:0x4000 Memory:fceff000-fcefffff

That gives us a nice comma at the start of every continuation line. In order to grab fields from this, the tr command can be used to replace the remaining whitespace, including the newlines, with pipes.

# /sbin/ifconfig eth1 | sed 's/          /,/' | tr '[:space:]' '|'
eth1||||||Link|encap:Ethernet||HWaddr|00:0E:35:7F:E2:98|||,inet|addr:192.168.2.13||Bcast:192.168.2.255||Mask:255.255.255.0|,inet6|addr:|fe80::20e:35ff:fe7f:e298/64|Scope:Link|,UP|BROADCAST|RUNNING|MULTICAST||MTU:1500||Metric:1|,RX|packets:1765|errors:0|dropped:39|overruns:0|frame:0|,TX|packets:1|errors:0|dropped:0|overruns:0|carrier:0|,collisions:0|txqueuelen:1000||,RX|bytes:23941275|(22.8|MiB)||TX|bytes:507899|(495.9|KiB)|,Interrupt:11|Base|address:0x4000|Memory:fceff000-fcefffff|||

Now that the fields are all delimited properly, let’s use the cut command to grab one record. Since I am interested in the RX and TX bytes, I’m going to grab the eighth comma-delimited field by using cut as follows:

# /sbin/ifconfig eth1 | sed 's/          /,/' | tr '[:space:]' '|' | cut -d , -f 8
RX|bytes:24014818|(22.9|MiB)||TX|bytes:507899|(495.9|KiB)|

That gave us a nice line of output which is easy to parse even further by using the cut command. You will notice the fields are delimited by a pipe (the | character) and are not always consistent, since we replaced every whitespace character with a pipe. Take a look at the first two fields, RX and bytes: this means that to get the RX bytes figure we need to cut yet again. Since I’m not too bothered about the raw byte count, and the human-readable figure is delimited in fields 3 and 4, I will concentrate on those.

# /sbin/ifconfig eth1 | sed 's/          /,/' | tr '[:space:]' '|' | cut -d , -f 8 | cut -d \| -f 3-4
(23.0|MiB)

This is a nice RX MiB output, yet it has one last problem: the pipe between the two values. sed can be used to replace this and any other characters you wish. Just issue a sed find and replace like this:

# /sbin/ifconfig eth1 | sed 's/          /,/' | tr '[:space:]' '|' | cut -d , -f 8 | cut -d \| -f 3-4 | sed 's/|/ /'
(23.0 MiB)
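Putting it all together, here is a small sketch that wraps the whole pipeline in a reusable script which takes the interface name as an argument (it assumes the classic net-tools ifconfig layout, with ten leading spaces on each continuation line):

#!/bin/bash
# Usage: ./rxbytes.sh eth1 - prints the human-readable RX byte count
IFACE="${1:-eth1}"
/sbin/ifconfig "$IFACE" \
  | sed 's/          /,/' \
  | tr '[:space:]' '|' \
  | cut -d , -f 8 \
  | cut -d \| -f 3-4 \
  | sed 's/|/ /'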

That looks good for now. If you would like more information on how to parse data, regarding this post or any other, you can always leave me a comment and I’ll try my best to help, especially if we can post the results on TheLinuxBlog in another Shell Script Sundays article. Thanks for reading The Linux Blog and come back soon!

Tar Archive Mischief.

Filed under: Shell Script Sundays — TheLinuxBlog.com at 1:22 am on Sunday, June 1, 2008

I ran into a problem the other day when I downloaded a particular tar.gz archive (Simple Machine Forums, to be specific). The problem was that, however good SMF might be, the developers did not put the files in a folder before they tar.gz’d them. This is not the only time I have run into this problem; a lot of developers actually do it. Over time it’s become a habit to assume that an archive’s contents are in a folder.

Here’s a solution to delete all files that were extracted from an archive:

tar xvzf [filename] > [filename]-filelist.txt
cat [filename]-filelist.txt | while read i; do rm "$i"; done

If you want to, you can do a dry run of the script by putting an echo in front of the rm statement and looking at the output. Any files of your own that shared a name with the archive’s contents (e.g. index.php) will most likely have been overwritten by the extraction in the first place, so it doesn’t hurt to delete them.

Once you have deleted all of the files from the archive you can simply create a directory and use the following to extract to it:

tar xvzf [filename] -C [yourdir]
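To avoid the mess in the first place, you can list an archive’s contents before extracting it and check whether everything lives under a single top-level directory; for example:

tar tzf [filename] | cut -d / -f 1 | sort -u

If that prints a single directory name, the archive is safe to extract as-is.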

Using Bash Scripts in Web Applications

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:22 pm on Sunday, May 25, 2008

Using bash scripts for web applications is not exactly rocket science, nor is it necessarily the best idea in the world, but it can be handy if you already have a bash script and want to use its functionality on the web. There are a couple of ways to use bash scripts on the web.

The first that I know of is as a CGI. All you have to do for this one is create a cgi-bin directory, or allow files with the extension .cgi to be executed; with Apache this is done in your httpd.conf file.
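A bash CGI just has to print a Content-Type header followed by a blank line before its output. Here is a minimal sketch, assuming your Apache ScriptAlias points at /usr/lib/cgi-bin (adjust the path and file name to your setup):

#!/bin/bash
# /usr/lib/cgi-bin/hello.cgi - must be executable by the web server user
echo "Content-Type: text/plain"
echo ""
echo "Hello from bash, the time is $(date)"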

The second is to use another scripting language to call the script. The easiest way for me is to use PHP. A system call to the script file can be made using the exec() function. Just make sure that the file has execute rights for the user that your web server runs as. Here is an example of using the exec() function in PHP:

$output = exec('/usr/local/bin/yourscript.sh');

The third method is to use Server Side Includes to include the script. I personally am not familiar with setting up SSIs, but this is how you execute a command from within an SSI:

<!--#exec cmd="/usr/bin/date" -->

Whichever method you choose, precautions have to be taken. Make sure that all inputs are sanitized so that a user cannot escape the command, pipe output to another file or manipulate the system in another way. In PHP it is easy to do this with escapeshellarg(), but I cannot speak for CGIs or SSIs. I hope this shows some insight as to how you can run bash scripts in your web application. If you have any other methods, such as using mod_python or maybe Tcl, please post them as a comment!

Bash Scripting Techniques

Filed under: Shell Script Sundays — TheLinuxBlog.com at 10:20 pm on Sunday, May 18, 2008

Here are some techniques that you can use in your bash scripts for finding and searching through files. Combined with other shell scripting techniques these can be very powerful.

Find all .jpg files in the current directory (and below) and print them:

find . -iname "*.jpg"

Find all files that you have access to read with a matching pattern, discarding permission errors:

find / -iname "pattern" 2>/dev/null

Normally text matched with grep is case sensitive. Here’s how to do a case insensitive search with grep:

cat [filename] | grep -i [match]

Finding and replacing text is easily done in bash with sed. This find and replace puts the contents into a new file:

cat [filename] | sed 's/FIND/REPLACE/' > [new filename]

Finding the line number that a particular line of text is on is sometimes useful. Here is how to do it:

cat [filename] | grep -n [match]

Looping over a file in bash and echoing the output is sometimes useful for processing text files. Here’s how to do it:

cat [filename] | while read i; do echo "$i"; done
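Combining a few of these, here is a small sketch that finds every .txt file below the current directory and prints matching lines with their file names and line numbers (the pattern and extension are just placeholders):

#!/bin/bash
# Case-insensitive search across all .txt files below the current directory
PATTERN="error"
find . -iname "*.txt" | while read -r f; do
    # -H prints the file name (GNU grep), -i ignores case, -n adds line numbers
    grep -Hin "$PATTERN" "$f"
done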

That’s about all the bash scripting techniques that I can currently think of for finding in files. I know there are a ton more that I use, but it’s hard to write them all down at once. As I come up with them or solve a problem I’ll add them here. If you have any of your own, please leave them in the comments.

RSS Feeds

Filed under: Shell Script Sundays — Kaleb at 11:43 am on Sunday, April 20, 2008

The other day I was playing around with AwesomeWM and I wanted to have the newest article from digg.com/linux_unix displayed in the statusbar. I thought to myself:

“I roughly know how RSS works, so I should be able to do this.”

It turns out it was extremely easy to do.

First, how does RSS work? It’s easy: it’s just an XML file that gets downloaded with a list of the articles on the site. Well, that’s pretty simple, so I wrote a little script that does all the things I need.

First I needed to download the list:

wget -c http://digg.com/rss/indexlinux_unix.xml

Done with that. Now, for what I wanted, and to make it a little cleaner, I moved the file:

mv indexlinux_unix.xml ~/.news

This way it was in a file that I could easily access.

After that it was just some simple editing of the file using sed. If you don’t know much about sed I suggest you read up on it; it is an extremely powerful tool for quick editing and scripting. The editing of the file was actually quite simple:

cat ~/.news | grep "<title>" | sed -e 's/<[/]title>//' | sed -e 's/<title>//' | sed -e '2,2 !d'

Now, no worries, I will explain this; it’s actually quite simple.

I will assume you know what cat ~/.news does, but if you don’t: it outputs the contents of the file until the end of the file.

| grep "<title>" is a very important part of the command. As I looked at the XML file I realized that I would get a simple list of all the articles if I grepped the title. However, that’s not all.

It was a very messy output with <title> at the beginning and </title> at the end. Nobody wants to look at that; what I wanted was the text in between. | sed -e 's/<[/]title>//' will get rid of the </title> in the line. I am almost certain that | sed -e 's/<\/title>//' would have done the same thing, but you can test that if you want. It needs to be written one of these ways because “/” is sed’s delimiter character here, so it has to be escaped or put inside a bracket expression.

The next part, | sed -e 's/<title>//', should be self explanatory: it just gets rid of the <title> in the line. So now, using the first three pipes, you will get a nice pretty list of all the articles.

This is not quite what we wanted, though; we wanted the newest article. That’s why we use | sed -e '2,2 !d'. This command cuts out everything except the second line of the list. “Hmm, but why the second line, Kaleb?” Well, because while creating this script I found that the first <title> line was the one that told me where I was getting this information from, namely http://digg.com/linux_unix, and I don’t want that. So I went with the second line for the first article. Easy, right?

Now, as I mentioned at the beginning of this article, I wanted this to give me a clickable link for the awesome statusbar. I will go over awesome piping later this week, but basically the only information you will need is this: go through the XML file for your RSS feed, find out between which tags the link for your article sits, use the above command to show you that link instead of the title, and then have Firefox (or whatever browser you use) open that link. It was a very simple thing to do.
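As a sketch of that idea, the same pipeline pointed at the <link> tags instead of <title> would look like this (assuming the feed puts each <link> on its own line, as the titles are):

cat ~/.news | grep "<link>" | sed -e 's/<[/]link>//' | sed -e 's/<link>//' | sed -e '2,2 !d'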

Kaleb Porter

porterboy55@yahoo.com

http://kpstuff.servebeer.com (website currently down)

Suspend Scripts for the Toshiba Tecra M2

Filed under: Quick Linux Tutorials,Shell Script Sundays — TheLinuxBlog.com at 12:15 am on Sunday, March 30, 2008

As you may know if you are a regular reader, I own a Toshiba Tecra M2. One of the things that annoyed me was that I had to turn the brightness up every time my computer came out of standby mode. The fix is to have a script adjust the brightness automatically around each suspend.

The script is intended to be run under cron. I have mine set up to suspend after 5 minutes of the lid being closed.

#!/bin/bash
# Suspend only if the lid is closed
if [ "$(cat /proc/acpi/button/lid/LID/state | sed 's/state:      //')" = "closed" ]; then
    # Read the current brightness, ignoring the "levels" line
    VAR=$(cat /proc/acpi/toshiba/lcd | grep -v levels | sed 's/brightness:              //');
    sudo su -c "echo mem > /sys/power/state";
    # Pick a direction that keeps the value on the 0-7 scale
    if [ "$VAR" -eq 1 ]; then
        ACTION=ADD;
    elif [ "$VAR" -eq 7 ]; then
        ACTION=SUB;
    else
        ACTION=ADD;
    fi;
    if [ "$ACTION" = "ADD" ]; then
        VAR=$(($VAR + 1));
    else
        VAR=$(($VAR - 1));
    fi;
    sudo su -c "echo brightness:$VAR > /proc/acpi/toshiba/lcd";
fi;

I run this with the following cron entry:

*/5 * * * * sh hibernate.sh

The script first checks the lid state; if it is closed, it reads the current brightness and suspends the machine. The if/elif picks the arithmetic operation so that when the laptop is opened the brightness changes by one step without falling off either end of the scale: if the brightness is 1 (or any value other than 7) it adds one, and if it is 7 it subtracts one. This is currently working out quite well for me. I don’t know how useful this is to anybody else, unless you happen to have a Toshiba that is doing the same thing, but it should give you a good overall idea of how to perform basic arithmetic in bash.
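On that note, bash arithmetic expansion on its own looks like this:

COUNT=5
COUNT=$((COUNT + 1))  # COUNT is now 6
echo $((COUNT * 2))   # prints 12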

Using wc and How To Count Table Rows

Filed under: Shell Script Sundays — TheLinuxBlog.com at 1:07 pm on Sunday, March 9, 2008

I made this little script to check how many packages were available on the web from the Cygwin Package Repository located at http://www.cygwin.com/packages

It’s a one-liner, but it does its job well.

CYGLIST=$(curl http://www.cygwin.com/packages/ | grep "<tr" | grep ball | wc -l); echo $CYGLIST;

All the above is doing is creating a variable called CYGLIST that is the result of grabbing the cygwin.com/packages/ page, grepping all of the TRs that also contain the word “ball” (for the image), and then using the wc -l (L) command to count how many results are found. Then the count is echoed out.

wc is a very useful command for printing newline, word and byte counts. This is a good example of how to use wc to count lines in a shell script. wc can also print all of these values for a file on one line. The syntax is below:

bash-3.1# wc file.txt
9  20 184 file.txt

The above shows the number of lines in file.txt, how many words are in the file and also how many bytes. In my first example wc uses the -l switch to display only the number of lines. This script can also be used with a little bit of bash math to calculate how many items are in an HTML list. I’m working on a script that automatically does this; when it’s finished I will be sure to post it here on The Linux Blog.

timestamps in the shell

Filed under: Shell Script Sundays — TheLinuxBlog.com at 12:02 pm on Sunday, March 2, 2008

Time and date functions are very important when writing shell scripts. I mostly use them for logging reasons, for example to know when something was run last. As much as I dislike time stamps they are still used (at least for now) and therefore I am giving an example.

I am unsure of a way to get a time stamp in plain Bash (though see the GNU date sketch at the end of this post). If you have PHP installed you can do the following to get a UNIX time stamp (suitable for inserting into a DB):

php -r 'echo time()."\n";'

php -r executes the PHP code inside the quotes. The time() function just returns the current UNIX time stamp. If you need to format a string based on a time stamp you can use the date() function. Here is an example of turning a time stamp into a readable string:

bash-3.1$ php -r 'echo date("l dS \of F Y h:i:s A","1204476759")."\n";'
Sunday 02nd of March 2008 11:52:39 AM

Take a look at the date() function page on php.net if you wish to use this method of using time stamps in the shell.
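For what it’s worth, GNU date (standard on most Linux systems) can produce the same values without PHP; a quick sketch:

date +%s              # UNIX time stamp (seconds since the epoch)
date -d @1204476759   # readable string from a stored time stamp (GNU syntax)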

If anyone has any other methods of using time stamps in the shell, or needs any help, as usual leave a comment :)

PHP Script To Log Into cPanel

Filed under: Shell Script Sundays — TheLinuxBlog.com at 3:47 am on Sunday, February 24, 2008

Earlier this week I made a script that logs into cPanel to check statistics. Basically, if you have a webhost that runs cPanel and you wish to log into cPanel programmatically, then this script is for you. Once you are logged in you can do basically anything you would normally do in the browser. For example, my specific use was to log into my cPanel nightly and parse some data provided by AWStats, but with some modification this script could automate anything you can do by hand.

Since this is more of a web project for me, I decided to write my cPanel login script in PHP. I found a PHP class to handle the login here. curl is used to fetch the URLs and I parse the data using PCRE regular expressions. The statistics code is still very basic, but I thought I would post it for those interested, and what better place than The Linux Blog’s Shell Script Sundays column?

Onto the script.

It consists of three scripts, each with its own purpose at run time. They are as follows:

cPanel.php – This script does all of the dirty work in connecting to cPanel and fetching the pages. I modified this a little from the original.
class.mysql.php – Just a generic database handler. MySQL configuration information is stored in here.
login.php – This is the script that starts off the process. I named it login.php instead of index.php so that it does not get served as the default page. login.php also does all of the parsing of the data and is where the data gets inserted into the database.

To run the script, edit login.php and then either put it in your PHP powered web server directory or run it from the command line by doing:

php login.php

The output should be as follows:

Num: 0 Date: 2454521 uniques: X visits: X visits per visitor: (Xvisits/visitor) pages: X pages per visitor: (XPages/Visit) hits: X hits per visitor: (XHits/Visit) bandwidth: X GB bandwidth per visitor: (XMB/Visit)
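Since my own use is a nightly run, the script can also be driven from cron. Here is a sketch of a crontab entry that runs it at 2am every night (the paths are assumptions; adjust them to wherever you put the script):

0 2 * * * php /path/to/login.php >> /var/log/cpanel-stats.log 2>&1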

Feel free to modify this as you wish. If any questions can be answered I’d be happy to do so. I’d like to hear what people are using this for too, so drop a comment!

Download the PHP cPanel Login Script

cURL Gotchas

Filed under: Shell Script Sundays — TheLinuxBlog.com at 4:31 pm on Sunday, February 17, 2008

I’ve been using cURL for a couple of projects recently and I thought I would post a couple of the “gotchas” I ran into.

Feel free to add to the list by leaving a comment.

1) User agent. Certain websites, especially Google, like to block the use of curl because some people use curl for abusive reasons. This can be fixed by changing your user agent.

User agents can be switched with curl by using the -A or --user-agent switch.

To change your user agent to Internet Explorer 7 on Windows XP, do the following when requesting a page:

curl -A "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" [URL]

Should you want to change your user agent to Netscape 4.8 or Opera 9.2 on Vista, you can use the following agent strings:

Netscape user agent string

Mozilla/4.8 [en] (Windows NT 6.0; U)

Opera user agent string

Opera/9.20 (Windows NT 6.0; U; en)

2) Separate post data with ampersands, or pass multiple -d options and curl will join them with ampersands for you. This one got me once.
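For example, these two invocations send the same POST body (the URL and field names are just placeholders):

curl -d "post=data&more_post=moredata" http://example.com/form.php
curl -d "post=data" -d "more_post=moredata" http://example.com/form.php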

3) Don’t use -G if you want to post data.

-G makes everything that is in -d get put into the GET query string instead of a POST body. Use the following format if you want to post data to a URL that also carries GET parameters:

curl -d "post=data&more_post=moredata" "urlgoeshere.php?get=getdata"

I’ll post more of these as I remember them; as stated above, if something has got you with curl, post it here and I’ll add it to the list with a link to your site! That’s all for this week – Owen.
