Linux Blog

Reattach Screen Script

Filed under: Shell Script Sundays — TheLinuxBlog.com at 2:02 pm on Sunday, April 12, 2009

A friend of mine who happens to be an avid screen user sent me this snippet below:

### Reattach to a screen if one exists ###
if [[ $TERM != 'screen' ]]; then
    if [[ `screen -list | grep -v "No" | awk '$2 { print }' | wc -l` == 0 ]]; then
        screen
    else
        screen -dr
    fi
fi

What this handy snippet does is look for an existing screen session. If it isn't lucky enough to find one, it just starts a session up for you; if it does find one, it detaches it from wherever it is attached and reattaches it here (-dr). It's rather handy to put in your .bashrc file to auto-launch a screen session. The only thing I have modified for my use is replacing -dr with -x, which lets me reattach the screen without detaching the session I may have open on another terminal. It works pretty well, although when you open a new "screen" window (CTRL-a, c), the tab doesn't show up in the other sessions until you change to it or cycle through them. It isn't a big deal and could even be a local configuration issue. Anyway, enjoy this snippet and, as always, let me know if you found it useful.
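For reference, here is what that modification looks like as a .bashrc fragment; the only change from the snippet above is -x (multi-attach) in place of -dr:

```shell
### Reattach to a screen if one exists, without detaching other terminals ###
# Same test as the snippet above, but screen -x attaches in addition to
# (not instead of) any session already attached elsewhere.
if [[ $TERM != 'screen' ]]; then
    if [[ `screen -list | grep -v "No" | awk '$2 { print }' | wc -l` == 0 ]]; then
        screen
    else
        screen -x
    fi
fi
```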

Fedora 9 Thunderbird Update Fix

Filed under: Linux Software — TheLinuxBlog.com at 12:01 am on Wednesday, January 14, 2009

While updating a Fedora 9 installation I ran across an error. The error was with the Mozilla Thunderbird package that I use on a regular basis.
The error looked like this:

Running Transaction
Updating : thunderbird 1/2
Error unpacking rpm package thunderbird-2.0.0.19-1.fc9.i386
error: unpacking of archive failed on file /usr/lib/thunderbird-2.0.0.19/dictionaries: cpio: rename

Obviously any fix that I implemented couldn't lose my mail. The problem was with the dictionaries, more specifically the /usr/lib/thunderbird-2.0.0.19/dictionaries file. The error is not very specific, but it lets us know it's having trouble unpacking the archive, and it ends with cpio: rename. So here is what I did to solve the problem:

cd /usr/lib/thunderbird-2.0.0.19/
sudo mv dictionaries dictionaries-old

Thunderbird data is stored in ~/.thunderbird, so it is advisable to make a backup of your mail if it is important to you. I didn't, since the directory above is a library directory and all of my mail can be downloaded again with IMAP. If you use POP you may want to consider doing a backup. After doing this, Thunderbird was fixed and I'm all up to date. Hooray!
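If you do want a backup first, it can be as simple as tarring up the profile directory. This is just a sketch; the backup file name and the use of environment variable overrides are my own choices, not from the original fix:

```shell
#!/bin/sh
# Sketch: back up the Thunderbird profile directory before upgrading.
# PROFILE defaults to the standard ~/.thunderbird location; BACKUP gets
# a dated name. Both can be overridden from the environment.
PROFILE="${PROFILE:-$HOME/.thunderbird}"
BACKUP="${BACKUP:-$HOME/thunderbird-backup-$(date +%Y%m%d).tar.gz}"

if [ -d "$PROFILE" ]; then
    # Archive the whole profile (mail, accounts, settings).
    tar czf "$BACKUP" -C "$(dirname "$PROFILE")" "$(basename "$PROFILE")"
    echo "Backed up $PROFILE to $BACKUP"
else
    echo "No profile at $PROFILE, nothing to back up"
fi
```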

Let me know if it worked for you and I’ll let you all know if there are any problems.

Use VNC through SSH

Filed under: Quick Linux Tutorials — TheLinuxBlog.com at 11:33 am on Thursday, November 20, 2008

Here is another quick tutorial:

Sometimes it's nice to tunnel through SSH. Perhaps you have SSH running, but the firewall does not allow anything in except SSH. You can tunnel VNC (or any other service) through SSH by doing the following:

On the machine local to you, establish an SSH connection to the remote machine with local (-L) port forwarding. This may seem confusing (it often confuses me); the <-p PORT> part is optional:

 ssh -L 5901:localhost:5900 username@HOST <-p PORT>

Once the connection is established, I can use vncviewer to connect to localhost on the port specified:

vncviewer localhost:5901
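If you use this tunnel often, the options can live in ~/.ssh/config so you don't have to retype them; the host alias below is made up, and Port is only needed if sshd listens on a non-standard port:

```
Host homebox
    HostName HOST
    Port 2222
    User username
    LocalForward 5901 localhost:5900
```

With that in place, `ssh -N homebox` sets up the tunnel and `vncviewer localhost:5901` works as above.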

That's all there is to it, have fun!

Using a custom Tomcat on Fedora

Filed under: Quick Linux Tutorials — TheLinuxBlog.com at 10:22 am on Wednesday, November 12, 2008

So, I hear you need to use Tomcat on Fedora, eh? Not happy with the Tomcat version available from the repository? Well my friends, you can add a custom Tomcat to Fedora and have it run as a service.

This post is somewhat related to my Adding a service on Fedora post, except this one is more specific to Tomcat. If you'd like more information on adding services to Fedora, that is the place to look.

Here is the script that I have been using: (Read on …)

Using Subversion with SSH & Custom Ports

Filed under: Linux Software,Quick Linux Tutorials — TheLinuxBlog.com at 9:09 am on Monday, September 15, 2008

Let's say you use Subversion on your home PC to keep track of projects, and you want to check out or export a project from a remote location. Here's the catch: sshd is running on a custom port, or forwarded from another. For some reason the command line SVN client does not support a port parameter when using the de facto svn+ssh://

svn co svn+ssh://thelinuxblog.com/owen/svn/project1/trunk project1
ssh: connect to host thelinuxblog.com port 22: Connection refused

Well, we know why the error above happens: I happen to run SSH on port 1337. The following workaround requires root privileges and may mess with your system a bit, but if you really need to check something out, it will work.

As root, log in and stop sshd if you are running it. Then, with SSH, forward local port 22 to the SSH port on the remote host:

[owen@thelinuxblog.com]$ sudo su -
[root@thelinuxblog.com]$ /sbin/service sshd stop
[root@thelinuxblog.com]$ ssh -p 1337 owen@thelinuxblog.com -L 22:<internal ip>:1337

Once this is done, your localhost:22 forwards to the remote host. From another session (on your local machine) you can verify the connection by using ssh localhost. You will probably get warnings about the host's identity having changed, or not being verified, but you can ignore them. Once you've tested it, just use SVN as normal. When finished, remember to log out of the SSH session, and start sshd back up again if you run it.
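As an aside, Subversion also supports custom tunnel schemes in ~/.subversion/config, which avoids taking over local port 22 at all. This fragment is my own example and the scheme name is made up:

```
[tunnels]
### use as: svn co svn+sshtunnel://thelinuxblog.com/owen/svn/project1/trunk project1
sshtunnel = ssh -p 1337
```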

VMWare: “Unable to build the vmnet module”

Filed under: General Linux,Linux Software — TheLinuxBlog.com at 10:49 am on Monday, July 21, 2008

If you run into the following problem:

VMware Server is installed, but it has not been (correctly) configured
for the running kernel. To (re-)configure it, invoke the
following command: /usr/local/bin/vmware-config.pl.

and then try to issue the vmware-config.pl command and get something similar to the following:

/tmp/vmware-config1/vmnet-only/bridge.c: In function 'VNetBridgeUp':
/tmp/vmware-config1/vmnet-only/bridge.c:949: error: implicit declaration of function 'sock_valbool_flag'
make[2]: *** [/tmp/vmware-config1/vmnet-only/bridge.o] Error 1
make[1]: *** [_module_/tmp/vmware-config1/vmnet-only] Error 2
make[1]: Leaving directory `/usr/src/kernels/2.6.25.10-47.fc8-i686'
make: *** [vmnet.ko] Error 2
make: Leaving directory `/tmp/vmware-config1/vmnet-only'
Unable to build the vmnet module.

Then try the VMware any-any patch from: http://groups.google.com/group/vmkernelnewbies/files
I had used the patch before to get my VMware Server up and running, but did not realize that you have to apply the patch again after a kernel upgrade, or your VMware Server will no longer work.

Who knew?

A few things you may not know about YUM

Filed under: Linux Software — TheLinuxBlog.com at 3:14 pm on Tuesday, July 15, 2008

Yum stands for "Yellowdog Updater, Modified"

Yum is a standard way to update multiple distributions.

The openSUSE build repository uses the yum updating system

Yum was written in Python.

If you install the yum-utils package you can download RPM packages with yumdownloader; for example, to grab yum's own source package:

yumdownloader --source yum

There are graphical front ends to YUM

Yum is maintained by the Linux@Duke project. That's right, the basketball team you love to hate: the Blue Devils.

Wakoopa For Linux

Filed under: General Linux,Linux Software — TheLinuxBlog.com at 12:01 am on Monday, March 31, 2008

I stumbled across Jake's blog post over at: http://blogs.howtogeek.com/jatecblog/posts/software-tracker-for-linux. Until this point I had never heard of the Wakoopa service. It seems like a really good idea; it is sort of the Alexa for software applications. Naturally I left a comment showing interest in an open source Wakoopa, and shortly after received an e-mail from Jake.

Here it is:

Hello Owen, 

First I'd like to clarify that I don't actually have a need for the
application tracker... it would be purely for fun. That said, I would love if
you would be willing to create this. Here is the idea I have envisioned in
more detail but do not have the skills to create:

1) The process list is purged every so often to generate a log file.
2) The log file is periodically sent to a server. It is cleared after each
time it is uploaded.
3) The server then has an application which goes through and sorts out process
names and so forth and presents them as user readable data (much like Wakoopa)

I think that this would be the easiest way, but I'd love to hear your
suggestions. If you were to make this I think it would be used and loved by many, as well as being useful.

Now that he has broken it down like that, it seems like it would be pretty easy to implement. The only thing that I can see being a little bit complicated is determining what processes are running and how long they have been running for. I hope to have a short shell script up for next Sunday's column, and some sort of prototype. There should be nothing new in this script that I haven't covered before on this blog, except possibly the sort command. Other commands I will probably use are ps or top, cat and echo. There will probably be lots of loops and conditional ifs. The good thing about this idea is that if I write a shell script to do this, someone will be able to translate it into another language. The part where I would really like to spend the majority of my time is the web interface. I expect that it will be written in PHP, but I am unsure of the database technology, given the recent happenings with MySQL.
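As a rough cut at step 1, something like this could snapshot running process names into a log file for later upload. The log path and line format here are just my guesses for the prototype, not anything final:

```shell
#!/bin/sh
# Prototype sketch for the tracker idea: snapshot the names of the
# currently running processes, with a count of each, into a log file.
# A real tracker would periodically upload and then clear this file.
LOGFILE="${LOGFILE:-/tmp/apptracker.log}"
STAMP=$(date +%s)

# ps -eo comm= prints one bare command name per process; uniq -c counts
# duplicates, and sort -rn puts the busiest commands first.
# Each log line is: <unix timestamp> <count> <command name>
ps -eo comm= | sort | uniq -c | sort -rn | while read count name; do
    echo "$STAMP $count $name"
done >> "$LOGFILE"

echo "Logged $(wc -l < "$LOGFILE") lines to $LOGFILE"
```

Run from cron, each pass appends a timestamped snapshot, which is roughly the "purged every so often to generate a log file" step from the e-mail.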

So when this open source Wakoopa prototype is finished how many people do you think will use this service? Would you use it? What do you think an acceptable update time is? Any one have any other questions / input?

Bringing The Internet Up After Failure

Filed under: Shell Script Sundays — TheLinuxBlog.com at 9:58 pm on Sunday, September 9, 2007

This Shell Script Sunday is a short one, but don't let that fool you about the power of the shell. I wrote this script earlier in the week due to power spikes at the office. All of our equipment would stay powered on thanks to UPSs, but unfortunately something at the ISP was not staying on. Once the brownout occurred, our router box would still have an IP and seem to be working, but it wouldn't. We had our suspicions about which piece of equipment it was, but had no power to fix it. I tried renewing the IP from the ISP and bringing the public interface down and back up (eth0 down, then eth0 up), but this was not successful. To fix it from the router I had to actually restart the network. This worked, but we have some services running at the office that I like to access from home. So to fix the problem I wrote a one liner to reset the network if the connection goes down.

ping -c 1 OurISP.com 2> /dev/null > /dev/null && echo > /dev/null || sudo /etc/rc.d/network restart

The techniques in this script are covered in Shell Scripting 101. All this does is ping OurISP.com one time and send both standard output and errors to /dev/null. If the ping is successful it does nothing, and if the ping fails it restarts the network. To get it to repeat at an interval I just set it up as a cron job. This did the trick, and I now do not have to worry about brownouts.
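The cron entry itself is nothing fancy; an entry along these lines in root's crontab would do it (the five-minute interval is just an example, and since root's crontab runs as root, the sudo is not needed):

```
# check the connection every five minutes, restart the network on failure
*/5 * * * * ping -c 1 OurISP.com > /dev/null 2>&1 || /etc/rc.d/network restart
```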

Ubuntu & Gentoo Servers compromised

Filed under: General Linux — TheLinuxBlog.com at 11:00 pm on Wednesday, August 15, 2007

The case of the Ubuntu servers being breached [wiki.ubuntu.com]
Missing security updates and system administrators not running updates on servers is a problem. I don't know why they didn't do any updates past Breezy. They suggest that it was because of problems with network cards and later kernels, but I don't get it. Since when do software updates for an operating system have anything to do with what kernel is running? If there is a problem with hardware support for the network card you have two choices. The first is to fix the driver yourself or pay someone to do it. The second is to replace the network card with a better supported device. Both options could be costly, but the problem would get fixed, and five of the servers wouldn't have been taken down at the same time.
If the kernels were configured correctly, the boxes probably wouldn't even have had to be rebooted.
Running FTP instead of a more secure alternative is not so bad, unless they were running accounts with higher privileges than guest or using system accounts, in which case that's just stupid.

The Gentoo Situation [bugs.gentoo.org]
Apparently there is a problem in the packages.gentoo.org script. The bugzilla article goes into deeper explanation, but basically there is some pretty unsafe code which could have allowed anyone to run any command. I understand that the code is old, but it probably should have been audited at some point. The problem would have stuck out like a sore thumb to a Python coder, who probably would have fixed it, or at least suggested a fix. The problem was found on Tuesday the 7th. All of the infra (I assume they mean infrastructure?) guys were at a conference last week, so they couldn't work on it. It still seems that if they were at the conference until midnight on the 12th, they would have been able to put up a coming back soon placeholder on the packages site by now. Hey, if they put some pay per click ads up there, maybe they would get some additional funds during the down time. I would like to see what products would be pushed through the advertising on that one. I believe that they could have reduced the downtime by releasing the code for the packages.gentoo.org site as open source, or by asking for help from developers to review and upgrade the code as needed.

It's not strange for web servers to get hacked. They get hacked all the time, but whose fault is it in the open source community? I really think that there is a problem in the community when it comes to situations like this, but the blame can't be placed on any one person. I would offer any assistance I could in getting these situations resolved, but it's not as easy as that. There has to be a certain level of trust for those working within a project; if they gave out keys to their servers to anyone, the servers probably would have been compromised a long time ago. I hope that the affected sites can pull themselves together and get back up and running as normal. It seems that Ubuntu did not have complete down time, but the Gentoo site is still down and there is no indication of when it will be back up.