Linux Blog

Monitor Disk Usage for Large Copies

Filed under: Shell Script Sundays — Owen at 12:24 pm on Sunday, August 23, 2015

Note: There are probably better ways of doing this. I am currently copying a decent amount of data using rsync between a NAS drive and a locally mounted USB drive and wanted to monitor the process.

This works for the local drive:

owen@thelinuxblog:~$ df -h /dev/sdc1 | cut -d \  -f 11 | tail -n 1

I started figuring out a way to display the NAS total usage, which looks something like this:

df -h |grep NAS2 | cut -d % -f 1 | cut -d \  -f 35- | cut -d \  -f 1

Which is fine, but it really is not needed and is a bit overkill when I know the total amount that is going to be copied, so I simplified it down a bit to look like this:

owen@thelinuxblog:~$ echo `df -h /dev/sdc1 | cut -d \  -f 11 | tail -n 1` of 100G
41G of 100G

Then I wrote a loop to print out the usage and the date every 5 minutes (300 seconds).

owen@gibson:~$ while [ $(echo `df -h /dev/sdf1 | cut -d \  -f 11 | tail -n 1` | sed 's/G//') -lt 100 ]; do echo `df -h /dev/sdf1 | cut -d \  -f 11 | tail -n 1` of 100G `date`; sleep 300; done;
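For what it's worth, a slightly less position-dependent variant of the same loop can be built with GNU df's --output option. This is just a sketch, assuming GNU coreutils and the same device and 100G target as above:

# print used space and a timestamp every 5 minutes until 100G has been used
while [ "$(df -BG --output=used /dev/sdf1 | tail -n 1 | tr -dc '0-9')" -lt 100 ]; do
    echo "$(df -h --output=used /dev/sdf1 | tail -n 1) of 100G $(date)"
    sleep 300
done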

This was a quick and dirty way of monitoring progress while passing the time, and now I'm able to keep an eye on things.

Shell Script Input Parameters

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, March 1, 2015

When writing shell scripts, it is often useful to have some type of input parameters. There are a few ways to do this, and it turns out I’ve written about two of the main ways before. The first is the Shell Script to get User Input post which is essentially using read, and the second is Creating Script Parameters with GetOpts. I’ll also cover a quick and dirty way of getting input into a shell script. From this you’ll be able to see the differences and decide what input method is best for your situation.
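As a taste of the quick and dirty method, positional parameters are about as simple as it gets. Here is a minimal sketch; the script name and greeting are purely illustrative:

#!/bin/bash
# greet.sh - illustrative example using positional parameters
# $1 is the first argument, $# is the argument count
if [ $# -lt 1 ]; then
    echo "Usage: $0 <name>" >&2
    exit 1
fi
echo "Hello, $1"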

(Read on …)

Random Numbers in Bash

Filed under: Shell Script Sundays — Owen at 8:00 am on Sunday, February 22, 2015

Going back to last week's article on running things continuously, part of the use case was generating random numbers.

Using echo $RANDOM works, but it doesn't work very well when you need a minimum and maximum value. There are techniques to do this, but what I've found easier is to use another language of your choice and run it with command substitution ($() or backticks). I'm sure there is a way to do it in Ruby, Python, or whatever language you prefer, but since I'm familiar with PHP, I'll use that in this example first:

for i in `seq 1 10`; do
echo `php -r 'echo rand(1, 100);'`
done;

This example iterates the loop 10 times and echoes a random number between 1 and 100. More information on PHP's rand() function can be found in the PHP documentation. Note that this does require having the php5-cli package installed.

If you don't have PHP but have Perl, you can use:

for i in `seq 1 10`; do
echo `perl -e 'print rand(100);'`
done;

That's great and all, but Perl returns what look like floating-point numbers. To get an integer, you have to cast to an int, which adds additional typing:

for i in `seq 1 10`; do
echo `perl -e 'print int(rand(100));'`
done;

To me this is easier, but if you don't have PHP, Perl, Ruby, Python or whatever your language of choice is, you can do it in bash (I always forget how, so here it is for reference):

echo $[ 1 + $[ RANDOM % 100 ]]
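If you find yourself doing this a lot, the bash version wraps up nicely into a small helper. This is just a sketch using the modern $(( )) arithmetic syntax, and the function name is my own:

# rand_between MIN MAX - print a pseudo-random integer between MIN and MAX inclusive
rand_between() {
    local min=$1 max=$2
    echo $(( min + RANDOM % (max - min + 1) ))
}

rand_between 1 100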

Or if you have more complex requirements, you could write a quick binary that does the same thing. Another alternative is to use another shell that you do have access to that happens to have a more robust random number generator.

Bash Continuous Loop

Filed under: Quick Linux Tutorials,Shell Script Sundays — Owen at 8:00 am on Sunday, February 15, 2015

I came across a use case for running something continuously until breaking out. Rather than picking a large number and using a for loop (for i in `seq 1 10000`), you can use a while loop with something that returns true. My first instinct was to use while echo, but that prints a blank line between each iteration. That could be useful, but if you don't want any kind of spacing or notifications you can use while true as follows:

while true; do 
echo "STUFF"; 
done;

This technique could be used with sleeps to run something every x seconds, although when you start getting into that you’d probably be better off using cron.
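For example, a minimal sketch of that pattern (the 60 second interval is arbitrary):

# re-run a command every 60 seconds until interrupted
while true; do
    date
    sleep 60
done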

Auto Clean-up Downloaded Files – Part IV

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, February 8, 2015

To avoid the complex task of comparing unknown files and types for what should be a simple job, I've made an executive decision to handle statistics instead. Hopefully I will not regret this should I decide to tackle file comparisons later. For cleaning up downloaded files there are really only a few statistics I can think of that are meaningful to the task of deleting multiple files.

The first is counts: the number of files in the folder, the number that match the find pattern, and the total count of deleted files.
The second metric is disk space, which is a good one but could be tricky to calculate given the different file size units (bytes, kilobytes, megabytes, etc.).
Timing is another option. We'll skip how long I spent on this, as it is useless. I'd rather spend my time writing something that can be reused than waste it pointing and clicking. It would be interesting to calculate how much time was spent writing versus the total run time, but we won't cover that. What we will cover is how long it took to discover and delete the files, a fun number if for nothing else but giggles.

Fortunately for us, utilities exist for all of these items and can be added fairly simply. We’ll start with the first and work our way down, regardless of how I feel about the last two items.
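As a rough sketch of the kind of plumbing those counters might use (the ~/Downloads path and the '*(1)*' pattern are placeholders here, not the script's actual values):

#!/bin/bash
# gather simple counts, disk usage and timing around a cleanup run
dir="$HOME/Downloads"
start=$(date +%s)

total=$(find "$dir" -type f | wc -l)                   # files in the folder
matches=$(find "$dir" -type f -name '*(1)*' | wc -l)   # files matching the pattern
space=$(du -sh "$dir" | cut -f 1)                      # disk space used

end=$(date +%s)
echo "$total files, $matches matching, $space used, took $((end - start)) seconds"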
(Read on …)

Create your First Shell Script

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, February 1, 2015

Shell scripting is a really useful skill to have. Creating a script to avoid repeating a task can save you time in the long run. What a lot of people don't know is that shell scripting is not hard, especially if you have some Linux command line experience. You can pretty much do anything you want with a script, and they are great for automating tasks. To create a shell script from a one-liner, all you really have to do is:

echo "[your one-liner here]" > [your-script-name-here]

That will create your file, which you can then change the permissions on and move to your ~/bin/ directory. It would be wise to add #!/bin/bash as the first line if bash is your shell of choice, especially if you choose to distribute it. Once you've translated your one-liner into a file that can be executed, you can start adding functionality to the script to ease future use.
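For example, a minimal sketch turning df -h into a script (the name diskcheck is just an example, and it assumes ~/bin exists and is on your PATH):

echo '#!/bin/bash' > ~/bin/diskcheck
echo 'df -h' >> ~/bin/diskcheck
chmod +x ~/bin/diskcheck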

Here are some explanations of basic functionality you can add to your scripts:
Variables store values such as strings, numbers, or command output so that they can be reused throughout the script.

If-then-else statements can be used to control flow or make decisions; they are very useful indeed.

read can be used to get input from the user, for example when dealing with an unknown value or one the user needs to supply.

functions are great for storing a particular set of instructions that can be called repeatedly without having to rewrite them.

loops do exactly what the name suggests: they loop. That is, they iterate over a variable or repeat an instruction. You could use a loop to call a function over and over until a condition is met.

getopts is a builtin that can be used to read options passed in when the script is called. Often this is used to change functionality or display usage information.

redirection techniques are used throughout shell scripts and are one of the fundamentals of shell scripting. Master this and you'll be piping and redirecting output to files and other programs in no time.

error checking and handling is often overlooked but shouldn't be underestimated. Checking for errors before they happen can save time and prevent undesired results.
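To tie a few of these together, here is a purely illustrative sketch that uses a variable, read, a function, an if statement and a loop:

#!/bin/bash
# greet the world a user-supplied number of times

greet() {
    echo "Hello, $1!"
}

read -p "How many greetings? " count

if [ -z "$count" ]; then
    count=1    # default when the user just presses Enter
fi

for i in $(seq 1 "$count"); do
    greet "world"
done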

Many of these techniques are covered throughout this blog (feel free to browse or search), and there are many great online resources and books available (both free and paid) to help you with your journey. A good place to start for most of the techniques is in my Shell Scripting 101 article. Good luck!

CORRECTION – Using BASH to sort a book collection. ISBN Data Mining – Part 1

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, January 25, 2015

This may be cheating but I consider it a break from the download cleanup script.

Amazingly, I got a comment out of the blue on an article I wrote in 2007 about ISBN Data Mining. The comment stated that the script didn't work. I did a little investigating and was able to find out why. I figured it was just old and no longer worked, but that was not the case. Apparently when I reformatted my posts for code a while back, some of the formatting got a bit fubar.

Luckily for me and Gabe, I was able to find an old copy:

Here is his script:

#!/bin/bash
ISBN="$1"

function fetchInfo () {
  ### Using barnesandnoble.com to fetch info...
  lynx -source "http://search.barnesandnoble.com/booksearch/isbninquiry.asp?ISBN=${ISBN}" |\
    tr -d '[:cntrl:]' | sed 's/></>\n</g' | while read -a lineArray; do

    ### Parsing book title.
    if [ "${lineArray[0]}" == "<h1" ]; then
      echo "4|Title: ${lineArray[*]}" | sed 's/<[^>]*>//g;s/ ([^)]*)//g'

    ### Parsing book author.
    elif [ "$(echo ${lineArray[*]} | grep "id=\"contributor\"")" ]; then
      echo "3|Author(s): ${lineArray[*]}" | sed 's/by //;s/<[^>]*>//g'

    ### Parsing additional data.
    elif [ "${lineArray[0]}" == "<li" ] &&
         [ "$(echo ${lineArray[*]} | grep -ve "bullet" -ve "title")" ]; then
      echo "1|${lineArray[*]}" | sed 's/<[^>]*>//g;s/:/: /;s/  / /'
    fi

  done | sort -ur | awk -F\| '{print $2}' | grep ":"
}

if [ "${#ISBN}" -ge "10" ]; then
  fetchInfo
fi

The script should be saved to a file and called as ./isbn.sh with an ISBN as the argument. Amazingly, after all of these years it still works; I guess that's one of the beauties of shell scripting. Here is the output:

owen@linuxblog:~$ isbn.sh 1593275676
Title:  How Linux Works: What Every Superuser Should Know    by  Brian Ward

Here is ISBN Data Mining – Part 2, although I cannot guarantee that it works after 8 years.

Auto Clean-up Downloaded Files – Part III

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, January 18, 2015

In Part 2, we added some read prompts to ask which directory to run the script in and used some bash if/then/else statements to do basic input validation. This week, using the Creating Script Parameters with getopts article, we'll enhance the script a little by removing the echo from the example and allowing the user to actually delete the files if they choose, while defaulting to not removing anything.
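A minimal sketch of that getopts pattern could look something like this (the -d flag, the "(1)" glob and the rest are illustrative, not the actual script from this series):

#!/bin/bash
# default to a dry run; only delete when -d is passed
delete=0

while getopts "d" opt; do
    case $opt in
        d) delete=1 ;;
        *) echo "Usage: $0 [-d]" >&2; exit 1 ;;
    esac
done

for file in *"(1)"*; do
    [ -e "$file" ] || continue      # skip if nothing matched the glob
    if [ "$delete" -eq 1 ]; then
        rm -v "$file"
    else
        echo "would remove: $file"
    fi
done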
(Read on …)

Auto Clean-up Downloaded Files – Part II

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:00 am on Sunday, January 11, 2015

Last week I showed a one-liner that could be used to remove duplicate files from your downloads folder. Using previous Shell Script Sunday articles, over the next few weeks we'll add some additional functionality to make it a little more useful.

On its own the snippet is not that useful, since it will only run in the current directory. Adding a prompt to ask the user what directory to run in, or defaulting to the current directory, would be a nice addition. Using the Shell Script to Get User Input article you'll see that adding some prompts with read is pretty easy. Next we'll use some bash if/then/else statements to validate that input, checking for blank input and making sure it is a valid directory, exiting if it is not.
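A quick sketch of what that prompt and validation might look like (purely illustrative, not the finished script from this series):

#!/bin/bash
# prompt for a directory, default to the current one, and validate it
read -p "Directory to clean [$(pwd)]: " dir

if [ -z "$dir" ]; then
    dir=$(pwd)    # blank input defaults to the current directory
fi

if [ ! -d "$dir" ]; then
    echo "Error: $dir is not a valid directory" >&2
    exit 1
fi

echo "Running in $dir"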
(Read on …)

Auto Clean-up Downloaded Files

Filed under: Shell Script Sundays — TheLinuxBlog.com at 11:37 am on Sunday, January 4, 2015

This week I went through my downloads folder, cleaning up extraneous files. In light of that, I'll share a quick tip on how to clean up the multiple copies of files that inevitably pile up. The issue is that when you save a file from Firefox or Chrome and then download it again, the browser just makes another copy with (1) or (2) appended to the name. I have a number of these on multiple systems, so hit the jump for a quick snippet and explanation.
(Read on …)

xrandr – Set Primary Monitor

Filed under: Linux Hardware,Linux Software,Shell Script Sundays — Owen at 11:04 pm on Sunday, October 27, 2013

I had an issue with my dual monitor setup where my second monitor was set as the primary, but only in X. Rearranging the monitors in Gnome preferences did nothing to solve the problem. While not exactly a shell script, here is a one-liner to change your primary monitor with xrandr.

#!/bin/bash
xrandr --output DVI-0 --primary

The above uses xrandr to set the primary monitor to DVI-0. I put this in my ~/bin folder, made it executable with chmod, and set it to start when Gnome starts. Problem solved!
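If you are not sure what your output is called (names like DVI-0 vary by driver), you can list the connected outputs first:

# list connected outputs so you know which name to pass to --primary
xrandr --query | grep " connected"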
