Linux Blog

Working with CSV files in Bash

Filed under: Shell Script Sundays — TheLinuxBlog.com at 1:25 pm on Sunday, October 19, 2008

CSV files are very common, and being able to work with them from Bash can be very useful for scripting. Here are some methods I sometimes use when dealing with CSV files in Bash. They may work for your CSVs, they may not; I accept no responsibility for their accuracy, and I'm sure there are better ways of accomplishing the same tasks. If anybody finds other ways or knows better, please comment and let me know. Don't run these on production data: it's always a good idea to make backups and test them before using them in a production environment. Who knows how your precious data could become mangled by a wrong number of quotes, a bad termination or a bad regex.
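As a small taste of the kind of thing that's possible, here is a minimal sketch that splits simple comma-separated lines with read. The file name and field names are made up for illustration, and it assumes no quoted fields or embedded commas; anything fancier needs a real CSV parser.

# Split each line of a simple three-field CSV on commas.
# Assumes a hypothetical contacts.csv with no quoted fields or embedded commas.
while IFS=, read -r name email phone; do
echo "Name: $name Email: $email Phone: $phone"
done < contacts.csv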

Using BASH to sort a book collection. ISBN Data Mining – Part 2

Filed under: Shell Script Sundays — TheLinuxBlog.com at 8:49 pm on Sunday, September 23, 2007

Last week's article used a bit of data mining to grab information on ISBNs from Barnes and Noble and dumped the information into text files. This week we'll take those text files and format the data for insertion into a MySQL database. This tutorial assumes that you are familiar with MySQL.

The data that was written to the text files was in the following format:

Title: DNS and BIND, Third Edition
Title:
Author(s): Paul Albitz, Mike Loukides (Editor)
Publisher: O'Reilly Media, Incorporated
Pub. Date: September 1998
ISBN: 1565925122
Format: Paperback, 480pp
Edition Number: 3
Edition Description: Older Edition
Customer Rating:

This can easily be parsed and formatted for insertion into a MySQL table.

First a database has to be created, and then a table structure has to be decided upon. Since the data already comes with field labels, I'll just use similar ones for the field names.
Create a database called book_info:

mysqladmin -u root create book_info
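To confirm the database was created (this assumes the same passwordless root access as the command above):

mysql -u root -e "SHOW DATABASES;"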

and now create a table within the book_info database that is to contain all of the data:

CREATE TABLE `books` (
`ISBN` VARCHAR( 20 ) NOT NULL ,
`title` VARCHAR( 50 ) NOT NULL ,
`author` VARCHAR( 80 ) NOT NULL ,
`publisher` VARCHAR( 50 ) NOT NULL ,
`pub_date` VARCHAR( 50 ) NOT NULL ,
`format` VARCHAR( 30 ) NOT NULL ,
`edition` INT( 2 ) NOT NULL ,
INDEX ( `ISBN` )
) ENGINE = MYISAM ;

This isn't the best MySQL table structure ever, but it will do for the purposes of this article, and besides, it can always be tuned later. Note that ISBN is a VARCHAR rather than an INT: the values the script below inserts are strings (and real ISBNs can start with a zero or end in an X anyway).
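One quick way to double-check the structure from the shell is with DESCRIBE:

mysql -u root book_info -e "DESCRIBE books;"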

With a directory full of the .txt files from the last issue of Shell Script Sundays, the following can be run to create a text file called bookQuery.sql:

for i in *.txt; do
echo "INSERT INTO \`book_info\`.\`books\` (\`ISBN\`, \`title\`, \`author\`, \`publisher\`, \`pub_date\`, \`format\`, \`edition\`) VALUES ('$(grep 'ISBN:' "$i")', '$(grep 'Title:' "$i" | sed 'q' | sed "s/'/\\\\'/g")', '$(grep 'Author(s):' "$i" | sed "s/'/\\\\'/g")', '$(grep 'Publisher:' "$i" | sed "s/'/\\\\'/g")', '$(grep 'Date:' "$i" | sed "s/'/\\\\'/g")', '$(grep 'Format:' "$i" | sed "s/'/\\\\'/g")', '$(grep 'Edition Number:' "$i" | sed "s/'/\\\\'/g")');" >> bookQuery.sql
done
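For the sample file above, the generated query should come out roughly like this (note that the field labels are still embedded in the values; more on that below):

INSERT INTO `book_info`.`books` (`ISBN`, `title`, `author`, `publisher`, `pub_date`, `format`, `edition`) VALUES ('ISBN: 1565925122', 'Title: DNS and BIND, Third Edition', 'Author(s): Paul Albitz, Mike Loukides (Editor)', 'Publisher: O\'Reilly Media, Incorporated', 'Pub. Date: September 1998', 'Format: Paperback, 480pp', 'Edition Number: 3');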

In turn, this file can be imported into the table created earlier by running the following:

mysql -u root < bookQuery.sql
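A quick sanity check from the shell will show whether the rows made it in:

mysql -u root book_info -e "SELECT COUNT(*) FROM books;"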

What's happening is pretty simple: for each file, grep finds the line of text we want to import, and then sed is applied. It is used twice on the title field: the first sed ('q') quits after printing the first line, so only the first Title line from the text file is used; the second, which also runs on every other appropriate field, escapes single quotes so the string does not break the query. This example does not take the field labels out of the lines, but that could easily be done with cut.
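As a rough sketch of that cut approach (assuming no label ever contains a colon of its own), something like this keeps everything after the first colon and trims the leading space:

# Pull just the value out of the first Title line.
grep 'Title:' "$i" | sed 'q' | cut -d: -f2- | sed 's/^ *//'

Stripping the labels this way would also let a numeric column like `edition` receive an actual number; as the query stands, when a string like 'Edition Number: 3' hits an INT column, MySQL will either store 0 with a warning or reject the insert, depending on the SQL mode.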

It's easy to import text files into MySQL with shell scripting, but the language I feel is best suited for this task is PHP. Sometime I'll go over how to do this with PHP.