Linux Blog

Linux Wireless Morals

Filed under: General Linux — TheLinuxBlog.com at 2:40 pm on Monday, April 7, 2008

Is it moral for someone who uses Linux to borrow someone else's wireless?

Let's say you are at a hot spot and need to jump online really quickly, but the internet café you are at charges for wireless. Is it moral to connect to someone else's WiFi?

Maybe you just moved and the internet has not arrived at your house yet. Is it moral to use a neighbor's for an undefined amount of time before you are settled in and have the internet set up?

If you answered yes to any of these questions, then you may either be cheap (like me) or have low morals. Either way, there are ways to protect your identity and information while borrowing wireless by using Linux.

A good way to protect yourself while borrowing someone else's wireless is to tunnel with SSH. You can run a Squid proxy on a port on a host you control and set up SSH to forward a local port to the Squid server. Once this is done, your unencrypted HTTP traffic is tunneled through an encrypted SSH session. If the person who owns the wireless network (or anyone else) were to sniff the packets, they would just see the destination address and not the full traffic information.
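A minimal sketch of that forward, assuming you already have Squid listening on port 3128 of a server you control (the user name and host name below are just placeholders):

# Forward local port 3128 to the Squid proxy running on the SSH server itself.
# "user" and "myserver.example.com" are placeholders for your own account and host.
ssh -L 3128:localhost:3128 user@myserver.example.com
# Then point your browser's HTTP proxy at localhost:3128; web traffic travels
# inside the encrypted SSH session as far as myserver.example.com.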

Use SSH for everything that is unencrypted: SSH to a known host and tunnel those protocols through it. FTP and POP are good examples of protocols that can be tunneled over SSH. Don't use an instant messenger directly over the borrowed connection; it is very easy to sniff the packets. Sometimes a friend may send incriminating information which could get you in trouble.
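For example, POP3 can be pulled through the same kind of tunnel; the host names below are placeholders, and note that the leg beyond the SSH host is still plain text:

# Forward local port 1100 to port 110 (POP3) on the mail server, via a trusted SSH host.
# "user", "shell.example.com" and "pop.example.com" are placeholders.
ssh -L 1100:pop.example.com:110 user@shell.example.com
# Point your mail client at localhost:1100; the traffic is encrypted as far as shell.example.com.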

A good tool to help protect your information while using someone else's wireless is DD-WRT. Once installed on a supported router it has many functions that can be used: bridging mode, VPN passthrough, and advanced routing can all be used to protect your information. DD-WRT is especially useful when set up as a bridge to the other person's wireless. You could use a NAT firewall to hide how many devices you really have connected and change the MAC addresses of the clients.
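On the client side, changing a Linux box's MAC address before it associates is straightforward; a quick sketch (the interface name and address are arbitrary examples):

# Change the MAC address of wlan0 before connecting (run as root).
# The interface name and the new address below are made-up examples.
ip link set dev wlan0 down
ip link set dev wlan0 address 00:11:22:33:44:55
ip link set dev wlan0 up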

In the future I'll be showing you more ways to protect your privacy while using wireless technologies, so stay tuned!

Fetching Online Data From Command Line

Filed under: Shell Script Sundays — TheLinuxBlog.com at 6:12 pm on Sunday, December 2, 2007

Shell Scripts can come in handy for processing or re-formatting data that is available from the web. There are lots of tools available to automate the fetching of pages instead of downloading each page individually.

The first two programs I’m demonstrating for fetching are links and lynx. They are both shell browsers, meaning that they need no graphical user interface to operate.

Curl is a program that is used to transfer data to or from a server. It supports many protocols, but for the purposes of this article I will only be showing HTTP.

The last method (shown in other blog posts) is wget. wget also fetches files over many protocols. The difference between curl and wget is that curl by default dumps the data to stdout, whereas wget by default writes the data to a local file named after the remote filename.

Essentially, the following commands do the exact same thing:

 owen@linux-blog-:~$ lynx http://www.thelinuxblog.com -source > lynx-source.html
owen@linux-blog-:~$ links http://www.thelinuxblog.com -source > links-source.html
owen@linux-blog-:~$ curl http://www.thelinuxblog.com > curl.html
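The defaults can also be swapped if you prefer one tool's behavior over the other; a quick sketch (curl -O needs a URL that ends in a filename, so the path below is only an example):

# curl can save to the remote filename, and wget can write to stdout instead.
curl -O http://www.thelinuxblog.com/index.php
wget -O - http://www.thelinuxblog.com > wget-stdout.html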

Apart from the shell browser interface, links and lynx also have some differences that may not be visible to the end user.
Both lynx and links re-format the HTML they receive into rendered text; the option for doing this is -dump. They each format it differently, so I would recommend using whichever one is easier for you to parse. Take the following:

 owen@linux-blog-:~$ lynx -dump http://www.thelinuxblog.com > lynx-dump.html
owen@linux-blog-:~$ links -dump http://www.thelinuxblog.com > links-dump.html
owen@linux-blog-:~$ md5sum links-dump.html
8685d0beeb68c3b25fba20ca4209645e  links-dump.html
owen@linux-blog-:~$ md5sum lynx-dump.html
beb4f9042a236c6b773a1cd8027fe252  lynx-dump.html

The differing md5sums indicate that the dumped output is not the same between the two browsers.

wget does the same thing (as curl, links -source and lynx -source) but creates a local file with the remote filename, like so:

 owen@linux-blog-:~$ wget http://www.thelinuxblog.com
--17:51:21--  http://www.thelinuxblog.com/
=> `index.html'
Resolving www.thelinuxblog.com... 72.9.151.51
Connecting to www.thelinuxblog.com|72.9.151.51|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html][  <=>                                ] 41,045       162.48K/s
 
17:51:22 (162.33 KB/s) - `index.html' saved [41045]
 
owen@linux-blog-:~$ ls
index.html

Here is the result of running md5sum on all of the files in the directory:

 owen@linux-blog-:~$ for i in $(ls); do md5sum $i; done;
a791a9baff48dfda6eb85e0e6200f80f  curl.html
a791a9baff48dfda6eb85e0e6200f80f  index.html
8685d0beeb68c3b25fba20ca4209645e  links-dump.html
a791a9baff48dfda6eb85e0e6200f80f  links-source.html
beb4f9042a236c6b773a1cd8027fe252  lynx-dump.html
a791a9baff48dfda6eb85e0e6200f80f  lynx-source.html

Note: index.html is wget's output.
Wherever the sums match, the output is the same.

What do I like to use?
Although all of the methods (excluding -dump) produce the same results, I personally like to use curl because I am familiar with the syntax. It handles variables, cookies, encryption and compression extremely well, and the user agent is easy to change. The last winning point for me is that it has a PHP extension, which is nice for avoiding system calls to the other tools.
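A quick sketch of those curl features from the command line (the user agent string and cookie jar path are arbitrary examples):

# Fetch a page with a custom user agent, storing and re-using cookies,
# and asking the server for compressed content.
curl --user-agent "Mozilla/5.0 (X11; Linux x86_64)" \
     --cookie-jar /tmp/cookies.txt \
     --cookie /tmp/cookies.txt \
     --compressed \
     http://www.thelinuxblog.com > curl-custom.html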