Secure browsing on an open WiFi network

Most of the traffic on the Internet is sent in the clear – fact. Back in the day the Internet was a trusting place. You’d think that, given it was developed as a DARPA (Defense Advanced Research Projects Agency) project, a lot of thought would have been put into security. Well, there was, and the thought process went something like this: “If you have access to the Internet you must be a good guy or you wouldn’t have access to the Internet.” Not a lot of point wasting expensive processing cycles on encryption when you trust everyone on the network anyway.

So a whole fleet of protocols sprang up where everything is sent in the clear. Most didn’t use usernames or passwords, but those that did happily sent them in clear text over the wire. Protocols like HTTP, FTP, SMTP, NTP, DNS, SNMP, RIP and so on all operate in the clear. As time went on some of these had encryption strapped on, but for the most part if you take a packet analyzer like Wireshark and have a look at the traffic on your home network (which you have written permission to analyse/sniff) you’ll see a lot of clear text going across the wire.

Of course this is only a problem if someone has access to the routers that your data passes through. Normally it’s not an issue, because it’s difficult to get physical access to this equipment in order to sniff the traffic. This all changes when we start talking about wireless networks, as the data is now transmitted through the air and those signals can travel a long way. Anyone within range of your signal can see your data pass by in the clear. To combat this, wireless networks have encryption like WEP/WPA that presents a barrier to eavesdroppers.

The recent popularity of Internet enabled mobile devices has fueled an explosion of unencrypted WiFi public access points. Whether it’s installed by the IT team in a corporate office or a mom and pop coffee shop owner, all these networks have one thing in common. They are the red light districts of Internet access. While the services on offer may be tempting, viruses, embarrassment, and theft await the unwary.

You simply cannot trust these networks. Not just because the guy next to you is running BackTrack, but also because the access point may have been compromised at installation or by someone running a man in the middle attack. Once the access point is compromised, applications that run over https are also potentially vulnerable. Even if the network is clean, draconian laws in some jurisdictions require all WiFi providers to maintain a log of their customers’ behavior.

One easy-to-set-up solution is to tunnel your traffic over ssh to your own squid proxy. I already mentioned this back in HPR episode 227, but I wanted to go into a bit more detail here.

This solution takes advantage of the fact that most browsers support accessing the Internet via a proxy server. The idea of a proxy server is to allow controlled access out of an internal network to the Internet. Instead of trying to contact an Internet site directly, the browser sends the request to the proxy server, which relays the request to the Internet and passes the response back to the browser. Normally you connect to a proxy on a local area network, but we don’t trust the local network any more. So we are going to set up the proxy server on a network that we do trust, and use that as the exit point. The traffic between the browser and the proxy server will be encrypted for us by a secure shell tunnel.

The proxy server will be set up to only allow connections from its loopback address – or to put it simply – only from processes running on the server itself. The web browser will send requests to its own loopback address on the laptop. The ssh client running on the same laptop will encrypt the packets and send them over the hostile network to the sshd process listening on the Internet address of your home server. The sshd process will decrypt these packets and forward them to the server’s loopback address, where the squid application is listening. The loopback address is in a special range of network addresses that allows applications running on the same server to communicate with each other over IP. In IPv4 it is actually a class A address range, 127.0.0.0/8, defined by RFC 3330. Although there are 16,777,214 addresses reserved for that network, in most cases only one address, 127.0.0.1, is set up. In IPv6 only one address (::1) is defined.

You can check the status of your loopback address by typing ifconfig lo

user@pc:~$ ifconfig lo
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:88 errors:0 dropped:0 overruns:0 frame:0
          TX packets:88 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5360 (5.3 KB)  TX bytes:5360 (5.3 KB)

Setting up the proxy server.

If you’ve heard HPR episode 227 then you are already running squid but for those that didn’t make it through, here’s a quick summary:

sudo apt-get install squid

That’s it. You now have the squid proxy server running and listening for traffic on port 3128. The default configuration on Debian and Ubuntu needs no modification, as it only allows access out to the Internet if the connections originate from the loopback address.
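
You can see why no changes are needed by looking at the access control section of /etc/squid/squid.conf. The exact wording varies between squid versions, but the relevant lines look something like this:

acl localhost src 127.0.0.1/32
http_access allow localhost
http_access deny all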

You can use netstat to find out if it actually is listening on port 3128.

user@server:~$ sudo netstat -anp | grep 3128
[sudo] password for user:
tcp        0      0 127.0.0.1:3128          0.0.0.0:*               LISTEN      1948/(squid)
user@pc:~$ man netstat
...
DESCRIPTION
       Netstat  prints  information about the Linux networking subsystem. ...

...
   -a, --all
       Show both listening and non-listening sockets.  With the --interfaces
       option, show interfaces that are not up
...
   --numeric , -n
       Show numerical addresses instead of trying to determine symbolic host,
       port or user names.
...
   -p, --program
       Show the PID and name of the program to which each socket belongs.
...

This tells us that the squid process with PID 1948 is listening for TCP connections on the loopback address 127.0.0.1 and port number 3128. We can even connect to the port and see what happens. Open two sessions to your server. In one we will monitor the squid log file using the tail command and in the other we’ll connect to port 3128 using the telnet program.

On the first screen:

user@server:~$ sudo tail -f /var/log/squid/access.log

Then on the other screen:

user@server:~$ telnet localhost 3128
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Then type anything here and press enter. You will get the standard squid html 400 Bad Request response back.

HTTP/1.0 400 Bad Request
Server: squid/2.7.STABLE6
Date: Mon, 05 Jul 2010 19:18:56 GMT
Content-Type: text/html
Content-Length: 1210
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from localhost
X-Cache-Lookup: NONE from localhost:3128
Via: 1.0 localhost:3128 (squid/2.7.STABLE6)
Connection: close

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

ERROR: The requested URL could not be retrieved


ERROR

The requested URL could not be retrieved


While trying to process the request: !!!!!!!!!! type anything here and press enter !!!!!!!!!!

The following error was encountered:

  • Invalid Request

Some aspect of the HTTP Request is invalid. Possible problems:

  • Missing or unknown request method
  • Missing URL
  • Missing HTTP Identifier (HTTP/1.0)
  • Request is too large
  • Content-Length missing for POST or PUT requests
  • Illegal character in hostname; underscores are not allowed

Your cache administrator is webmaster.


Generated Mon, 05 Jul 2010 19:18:56 GMT by localhost (squid/2.7.STABLE6)

Connection closed by foreign host. user@server:~$

Back on the first screen, where tail is following the access log, you will see the error:

1278358230.552      0 127.0.0.1 TCP_DENIED/400 1503 NONE NONE:// - NONE/- text/html

Configure ssh server

If you need to install ssh you can do so with the following command

sudo apt-get install openssh-server

You will need to modify your firewall/modem to allow incoming ssh connections to your ssh server. Now we come across a minor problem. In an attempt to restrict access to web surfing, many of these hot-spots restrict outbound traffic to just http (port 80) and https (port 443) traffic. So your ssh client may not be able to connect to the ssh server because the default ssh port 22 may be blocked. The problem is easily fixed by telling the ssh client to use a port number that isn’t blocked by the hot-spot and that you’re not already using on your home router. You just need to pick a suitable port.

By the way, you can look up common port numbers in the file /etc/services.
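
For example, to see what a given port is normally used for (the exact output depends on your distribution):

user@pc:~$ grep -w 443 /etc/services
https           443/tcp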

Using port 80 is probably the best option, unless you are running your own webserver at home. In my case Mr. Yeats is running a mirror of nl.lottalinuxlinks.org, so my next option is to use port 443 (https). It’s unlikely that hot-spots block that port, or the other customers would complain that they can’t log in to Facebook.

You can approach this in two ways.

  • Set up a NAT port translation on the firewall. In that setup Internet clients connect to your external IP address on port 443 and your router redirects that to port 22 of your internal server.
  • Allow port 443 to pass through your firewall and hit the server directly, in which case the sshd server itself needs to listen on port 443.

The procedure for setting up a port forwarding session on your home device will vary. These are the settings on my Speedtouch ADSL modem:
ADSL router settings that redirect port 443 to port 22

The other option is to set up your ssh server to also listen on port 443. Here again you can use netstat to find out if port 443 is available. Run the following command and if nothing is returned then the port is free.

user@pc:~$ sudo netstat -anp | grep 443

Assuming that the port is available, you can edit the file /etc/ssh/sshd_config and add the line Port 443 under the existing Port 22 entry. After restarting the ssh daemon your home server should be listening on port 443 as well as port 22.

user@pc:~$ sudo grep Port /etc/ssh/sshd_config
Port 22
user@pc:~$ sudo netstat -anp | grep sshd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      917/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      917/sshd
user@pc:~$ sudo vi /etc/ssh/sshd_config
user@pc:~$ sudo /etc/init.d/ssh restart
 * Restarting OpenBSD Secure Shell server sshd                                                                                          [ OK ]
user@pc:~$ sudo grep Port /etc/ssh/sshd_config
Port 22
Port 443
user@pc:~$ sudo netstat -anp | grep sshd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1790/sshd
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      1790/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1790/sshd
tcp6       0      0 :::443                  :::*                    LISTEN      1790/sshd
user@pc:~$

Once the external port is set up to listen on 443 you should be able to connect to it from outside your network. You will only be able to connect from inside your network if you have set sshd itself to listen on port 443; if you implement port forwarding instead, connections from the internal network won’t work, because your traffic will never be routed to the external port of your firewall/modem.

If you don’t have a permanent IP address you’ll need to set up a dynamic DNS service so you will know which external address is configured on your router.

You can now check that you can connect from an external location to your server on port 443 by running the following command:

user@pc:~$ man ssh
...
     -p port
             Port to connect to on the remote host.  This can be specified on a per-host basis in the configuration file.
...
user@pc:~$ ssh -p 443 user@example.com

Assuming you were able to log in, disconnect so that we can connect again, this time with port forwarding enabled. As we know, squid listens on port 3128, so now you need to open an ssh session that tunnels traffic sent to port 3128 on the loopback address of the client to port 3128 on the loopback address of the server at the other end. You can do this by running the following command:

user@pc:~$ man ssh
...
     -L [bind_address:]port:host:hostport
             Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
...
user@pc:~$ ssh -L 3128:localhost:3128 -p 443 user@example.com
...

You should now have an ssh session open to your home server and everything will look very much as before. You can check that ssh is forwarding traffic from your client to the server; the ssh client should be listening on port 3128 on your laptop. You can verify this using netstat.

user@pc:~$ sudo netstat -anp | grep 3128
[sudo] password for user:
tcp        0      0 127.0.0.1:3128          0.0.0.0:*               LISTEN      1872/ssh
tcp6       0      0 ::1:3128                :::*                    LISTEN      1872/ssh

This shows that the ssh client process with PID 1872 is listening for TCP connections on the loopback address 127.0.0.1 and port number 3128.

Now all you have to do is set your browser to use a proxy of localhost port 3128.
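
Before changing any browser settings you can sanity-check the tunnel from the command line on the laptop. curl can be pointed at the local end of the tunnel explicitly (the URL below is only an example), and the request should then show up in the squid access log on your home server:

user@pc:~$ curl -x http://127.0.0.1:3128 http://example.com/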


HPR: Shot of Hack: BSOD on Kubuntu

Some of the guys complained that my 22″ analogue clock screen-saver was a bit boring, so I thought that I would pull the fake Blue Screen of Death screen-saver trick on them.

The screen saver …

BSOD – shows fake fatal “screens of death” from a variety of computer systems, including Microsoft Windows Blue Screen of Death, a Linux kernel panic, a Darwin crash, an Amiga “Guru Meditation” error, a sad Mac and many others.

I googled and found that it was in the xscreensaver-data-extra package which I had installed and I even had a configuration file under /usr/share/xscreensaver/config/bsod.xml. However I could not find the BSOD screen-saver listed on my KDE4 system settings screen saver panel.

So why isn’t xscreensaver working? Well, it turns out

… you’re not running xscreensaver at all, you’re running “kscreensaver”. You should stop.

Instead of using the usual xscreensaver mechanisms, the KDE folks have chosen to roll their own screen saver wrapper that is inferior to the xscreensaver-demo way of doing things in any number of ways.

So if you need an xscreensaver, then you need to install xscreensaver. Which means that you need to disable the KDE screensaver and replace the existing "lock keyboard" application. They have a FAQ on how to do this.

The first step is to install xscreensaver and recommended applications.

sudo apt-get install xscreensaver xli xloadimage xfishtank qcam 

Next you need to edit .kde/Autostart/xscreensaver.desktop and add the following lines:

[Desktop Entry]
Exec=xscreensaver
Name=XScreenSaver
Type=Application
X-KDE-StartupNotify=false

This will be sufficient to start the screen saver, but you also need to disable kscreensaver by un-checking "Start Automatically" in system settings.

If you like the idea of locking the screen with the padlock icon or by pressing Ctrl+Alt+L, then you also need to change the screen locker program. I found it at /usr/lib/kde4/libexec/kscreenlocker and replaced it with a script that contained the following:

#!/bin/sh
xscreensaver-command -lock

I then made it executable using the command:

chmod +x /usr/lib/kde4/libexec/kscreenlocker
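
If you want to be able to switch back to the KDE locker later, it is worth keeping the original binary around before overwriting it. A minimal sketch of the whole replacement, assuming the same path as on my system:

sudo mv /usr/lib/kde4/libexec/kscreenlocker /usr/lib/kde4/libexec/kscreenlocker.orig
printf '#!/bin/sh\nxscreensaver-command -lock\n' | sudo tee /usr/lib/kde4/libexec/kscreenlocker
sudo chmod +x /usr/lib/kde4/libexec/kscreenlocker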

Once I had that done I recorded a short Hacker Public Radio episode.

And you should too.


HPR: Shot of Hack – Changing the time offset of a series of photos.

The problem: You have a series of photos where the time is offset from the correct time but is still correct in relation to each other.

The Hacker Public Radio Logo of a old style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 546

 

Here are a few of the times that I’ve needed to do this:

  • Changing the battery on my camera switched to a default date.
  • I wanted to synchronize the time on my camera to a GPS track so the photos matched the timestamped coordinates.
  • At a family event where images from different cameras were added together.

You can edit the timestamp using a GUI, and many photo manipulation applications like the GIMP support metadata editing.

For example on KDE -> gwenview -> plugins -> images -> metadata -> edit EXIF


The problem is that this gets tiresome after a few images, and anyway the times are correct in relation to each other – I just need to add or subtract a time correction to them en masse.

The answer: exiv2 – Image metadata manipulation tool. It is a program to read and write Exif, IPTC and XMP image metadata and image comments.

user@pc:~$ exiv2 *.jpg
File name       : test.jpg
File size       : 323818 Bytes
MIME type       : image/jpeg
Image size      : 1280 x 960
Camera make     : FUJIFILM
Camera model    : MX-1200
Image timestamp : 2008:12:07 15:12:59
Image number    :
Exposure time   : 1/64 s
Aperture        : F4.5
Exposure bias   : 0 EV
Flash           : Fired
Flash bias      :
Focal length    : 5.8 mm
Subject distance:
ISO speed       : 160
Exposure mode   : Auto
Metering mode   : Multi-segment
Macro mode      :
Image quality   :
Exif Resolution : 1280 x 960
White balance   :
Thumbnail       : image/jpeg, 5950 Bytes
Copyright       :
Exif comment    :

The trick is to pick an image where you can figure out what the time should have been, and work out the time offset from that. In my case I needed to adjust the date forward by six months and four days while changing the time back by seven hours. I used the command exiv2 -O 6 -D 4 -a -7 *.jpg

-a time
    Time adjustment in the format [-]HH[:MM[:SS]].
    This option is only used with the 'adjust' action. Examples:
        1 adds one hour,
        1:01 adds one hour and one minute,
        -0:00:30 subtracts 30 seconds.
-Y yrs
    Time adjustment by a positive or negative number of years, for the 'adjust' action.
-O mon
    Time adjustment by a positive or negative number of months, for the 'adjust' action.
-D day
    Time adjustment by a positive or negative number of days, for the 'adjust' action.

When we run this we can see that the timestamp has now changed.

user@pc:~$ exiv2 *.jpg | grep timestamp
Image timestamp : 2009:06:11 08:12:59

That’s it. Remember this is the end of the conversation – to give feedback you can either record a show for the HPR network and email it to admin@hackerpublicradio.org or write it on a post-it note and attach it to the windscreen of Dave Yates’s car as he’s recording his next show.

More Info
http://www.hackerpublicradio.org
https://kenfallon.com/?cat=12


HPR: A private data cloud

The Hacker Public Radio Logo of a old style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 544

Over the last two years I have stopped using analogue cameras for my photos and videos. As a result I also don’t print out photos any more when the roll is full, which goes some way to explaining why my mother has no recent pictures of the kids. Living in a digital world brings with it the realization that we need to take a lot more care when it comes to making backups.

In the past, if my PC’s hard disk blew up, virtually everything of importance could be recreated. That simply isn’t the case any more when the only copy of your cherished photos and videos is on your computer. Add to that the fact that, in an effort to keep costs decreasing and capacity increasing, hard disks are becoming more and more unreliable (pdf).

A major hurdle to efficient backups is that the capacity of data storage is exceeding what can be practically transferred to ‘traditional’ backup media. I now have a collection of media reaching 250 GB, where backing up to DVD is not feasible any more. Even if your collection is smaller, be aware that SD cards, USB sticks, DVDs and even tapes degrade over time.

And yet hard disks are cheap. You can get a 1.5 TB disk from amazon.com for $95 or a 1 TB disk for €70 from mycom.nl. So the solution would appear to be a juggling act where you keep moving your data around a pool of disks and replace the drives as they fail. Probably the easiest solution is to get a hand-holding Drobo or a sub-100 $/€ low power NAS solution. If you fancy doing it yourself, Linux has had support for fast software mirroring/RAID for years.
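
For the do-it-yourself route, a two-disk RAID-1 mirror is essentially a one-liner with mdadm. The device names below are placeholders and creating the array will wipe whatever is on those partitions, so treat this purely as a sketch:

sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext3 /dev/md0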

Problem solved ….

NASA Image of the Earth taken by Apollo 17 over green binary data

…well not quite. What if your NAS is stolen or accidentally destroyed?

You need to consider a backup strategy that also mirrors your data to another geographic location. There are solutions out there to store data in the cloud (Ubuntu One, Dropbox, etc.). The problem is that these services are fine for ‘small’ amounts of data but get very expensive very quickly for the amount of data we’re talking about.

The solution, well my solution, is to mirror data across the Internet using rsync over ssh to my brother’s NAS, and he mirrors his data to mine. This involves a degree of trust, as you are now putting your data into someone else’s care. In my case it’s not an issue, but if you are worried about this you can take the additional step of shipping them an entire PC. This might be a low power device that has just enough of an OS to get onto the Internet. From there you can ssh in to mount an encrypted partition. When hosting content for someone else you should consider the security implications of another user having access to your network from behind your firewall. You would also need to be confident that they are not hosting anything or doing anything that would get you in trouble with the law.

Once you are happy to go ahead, what you need to do is start storing all your important data on the NAS in the first place. You will want to have all your PCs and other devices back up to it. It’s probably a good idea to mount the NAS on the client PCs directly using nfs, samba, sshfs etc. so that data is saved there directly. If you and your peering partner have enough space you can start replicating immediately, or you may need to purchase an additional disk for your remote peer to install. I suggest that you do the initial drop locally and transfer the data by sneakernet, which will be faster and avoids issues with the ISPs.
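
As an example of mounting the NAS directly from a client, sshfs is probably the quickest to try since it only needs ssh access; the host name and paths here are just placeholders:

sudo apt-get install sshfs
mkdir -p ~/nas
sshfs user@nas.example.com:/data/AUTOSYNC ~/nas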

It’s best to mirror between drives that can support the same file attributes. For instance copying files from ext3 to fat32 will result in a loss of user and group permissions.

When testing I usually create a test directory on the source and destination that have some files and directories that are identical, different and modified so that I can confirm rsync operations.

To synchronize between locally mounted disks you can use the command:

rsync -vva --dry-run --delete --force /data/AUTOSYNC/ /media/disk/

/data/AUTOSYNC/ is the source and /media/disk/ is the destination. The --dry-run option will go through the motions of copying the data but not actually do anything, and this is very important when you start so you know what’s going on. The -a option is the archive option and is equivalent to -rlptgoD. Here’s a quick run through the rsync options:

-n, --dry-run
    perform a trial run that doesn't make any changes
-v, --verbose
    increases the amount of information you are given during the transfer.
-r, --recursive
    copy directories recursively.
-l, --links
    recreate the symlink on the destination.
-p, --perms
    set the destination permissions to be the same as the source.
-t, --times
    set the destination modification times to be the same as the source.
-g, --group
    set the group of the destination file to be the same as the source.
-o, --owner
    set the owner of the destination file to be the same as the source.
-D
    transfer character, block device files, named sockets and fifos.
--delete
    delete extraneous files from dest dirs
--force
    force deletion of dirs even if not empty

For a complete list see the rsync web page.

Warning: Be careful when you are transferring data that you don’t accidentally delete or overwrite anything.

Once you are happy that the rsync is doing what you expect, you can drop the --dry-run and wait for the transfer to complete.

The next step might be to ship the disk off to the remote location and then setup the rsync over ssh. However I prefer to have an additional testing step where I rsync over ssh to a pc in the home. This allows me to work out all the rsync ssh issues before the disk is shipped. The steps are identical so you can repeat this step once the disk has been shipped and installed at the remote end.

OpenBSD and OpenSSH mascot Puffy

OpenSSH

On your NAS server you will need to generate a new ssh public and private key pair that has no passphrase associated with it. The reason for this is that you want the synchronization to occur automatically, so you need to be able to access the remote system securely without having to enter a password. There are security concerns with this approach, so again proceed with caution. You may wish to create a separate user for this but I’ll leave that up to you. Now you can add the public key to the remote user’s .ssh/authorized_keys file. Jeremy Mates’ site has more information on this.
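
A minimal sketch of generating such a key pair and putting the public half on the remote machine (the key file name is just an example):

ssh-keygen -t rsa -f /home/user/.ssh/rsync-key -N ""
ssh-copy-id -i /home/user/.ssh/rsync-key.pub user@example.com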

To confirm the keys are working you can try to open a ssh session using the key you just setup.

ssh -i /home/user/.ssh/rsync-key user@example.com

You may need to type yes to add the keys to the .ssh/known_hosts file, so it makes sense to run that command as the user that will be doing the rsyncing. All going well you should now be logged into the other system.

Once you are happy that secure shell is working all you now need to do is add the option to tell rsync to use secure shell as the transport.

rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/

All going well there should be no updates but you may want to try adding, deleting and modifying files on both ends to make sure the process is working correctly. When you are happy you can ship the disk to the other side. The only requirement on the other network is that ssh is allowed through the firewall to your server and that you know the public IP address of the remote network. For those poor people without a fixed IP address, most systems provide a means to register a dynamic dns entry. Once you can ssh to your server you should also be able to rsync to it like we did before.

Of course the whole point is that the synchronization should be seamless so you want your rsync to be running constantly. The easiest way to do this is just to start a screen session and then run the command above in a simple loop. This has the advantage of allowing you to get going quickly but is not very resistant to reboots. I created a simple bash script to do the synchronization.

user@pc:~$ cat /usr/local/bin/autosync
#!/bin/bash
while true
  do
  date
  rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/
  date
  sleep 3600
done
user@pc:~$ chmod +x /usr/local/bin/autosync

We wrap the rsync command in an infinite while loop that outputs a time stamp before and after each run. I then pause the script for an hour after each run so that I’m not swamping either side. After making the file executable you can add it to the crontab of the user doing the rsync. See my episode on cron for how to do that. This is a listing of the crontab file that I use.

user@pc:~$ crontab -l
MAILTO=""
0 1 * * * timeout 54000 /usr/local/bin/autosync > /tmp/autosync.log  2>&1

There are a few additions to what you might expect here. Were I to run the script directly from cron, it would spawn a new copy of the autosync script at one o’clock every morning. The script itself never terminates, so over time there would be many copies of it running simultaneously. That isn’t an issue here, because cron actually calls the timeout command, and it is the timeout command that calls the autosync script and kills it after 54,000 seconds (15 hours). The reason for this is that my brother doesn’t want me rsyncing in the evening when he is usually online. I could have throttled the amount of bandwidth I used as well, but he said not to bother.

--bwlimit=KBPS
    This option allows you to specify a maximum transfer rate in kilobytes per second.
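
Had he wanted it, the option would slot straight into the existing command; the limit of 100 KB/s below is just an example:

rsync -va --delete --force --bwlimit=100 -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/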

As the timeout command runs in its own process, its output is not redirected to the logfile. In order to stop the cron owner’s email account getting a mail every time the timeout occurs, I added a blank MAILTO="" line at the start of the crontab file. Thanks to UnixCraft for that tip.

Well that’s it. Once anyone on your network saves a file it will be stored on their local NAS and over time it will be automatically replicated to the remote network. There’s nothing stopping you replicating to other sites as well.

An image from screencasters.heathenx.org episode 94

screencasters.heathenx.org

This months recommended podcast is screencasters at heathenx.org.
From the about page:

The goal of Screencasters.heathenx.org is to provide a means, through a simple website, of allowing new users in the Inkscape community to watch some basic and intermediate tutorials by the authors of this website.

heathenx and Richard Querin have produced a series of shows that put a lot of ‘professional tutorials’ to shame. Their instructions are clear and simple and have given me a good grounding in a complex and powerful graphics program, despite the fact that I have not even installed Inkscape yet. They even have mini tutorials on how to make your way around the interface and menus.

After watching the entire series I find myself looking at posters and advertisements knowing how that effect could be achieved in Inkscape. If you are interested in graphics you owe it to yourself to check out the series. If you know someone using Photoshop then burn these onto DVD and install Inkscape for them. Even if you have no creative bone in your body this series would allow you to bluff your way through graphic design.

Excellent work.


How to install Checkpoint ssl extender vpn (snx) under Debian/Kubuntu

I have released an update to this blog post: See CheckPoint SNX install instructions for major Linux distributions

Another in my series of 6 months from now posts.

There is a Linux client for Checkpoint’s SSL extender VPN. The binary is called snx and it works quite reliably once you get past the problems of getting it installed. The first thing you need is the software itself, which you will need to get from Checkpoint. The install is easy enough, just run the install script:

./snx_install.sh

or if you want a bit more feedback you can run

sh -x ./snx_install.sh

This shell script contains an embedded tar file which installs the snx binary as /usr/bin/snx. To run the vpn script simply type

user@pc:~$ snx

If all goes well then you should see the SNX login screen as shown here:

Check Point's Linux SNX
build XXXXXXXXX
Please enter your password:

SNX - connected.

Session parameters:
===================
Office Mode IP      : xxx.xxx.xxx.xxx
DNS Server          : xxx.xxx.xxx.xxx
Secondary DNS Server: xxx.xxx.xxx.xxx
DNS Suffix          : example.com
Timeout             : x hours

Now we get on to what happens if things don’t go well – which for me has been the default scenario.

We have the famed snx: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory bug.
On Debian Sid you can simply install the correct library

$ aptitude install libstdc++5

To get around this on Ubuntu, download an older package.

$ wget http://nl.archive.ubuntu.com/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_i386.deb

I extracted the debian package first to see what I was about to install.

$ dpkg-deb --extract libstdc++5_3.3.6-17ubuntu1_i386.deb ./
$ find
.
./usr
./usr/share
./usr/share/doc
./usr/share/doc/libstdc++5
./usr/share/doc/libstdc++5/TODO.Debian
./usr/share/doc/libstdc++5/copyright
./usr/share/doc/libstdc++5/README.Debian
./usr/share/doc/libstdc++5/changelog.Debian.gz
./usr/lib
./usr/lib/libstdc++.so.5.0.7
./usr/lib/libstdc++.so.5
./libstdc++5_3.3.6-17ubuntu1_i386.deb

Nothing too strange there so I then installed the package

$ dpkg -i libstdc++5_3.3.6-17ubuntu1_i386.deb

and after that snx works just fine …..

Edit2:
…. Until you try to do this on an AMD64/x86_64 computer. The steps above are the same except that you need to first install the amd64 version of gcc 3.3 as well.

dpkg -i gcc-3.3-base_3.3.6-15ubuntu4_amd64.deb
dpkg -i libstdc++5_3.3.6-15ubuntu4_amd64.deb

One extra step is to also install the 32-bit libstdc++ libraries, as snx is compiled as an i386 application.

dpkg-deb -x libstdc++5_3.3.6-17ubuntu1_i386.deb ./tmp
cp -v tmp/usr/lib/* /usr/lib32/

Shouts go out to Husain Al-Khamis for this one.

and after that snx works just fine …..

until you update to kernel 2.6.32-21-generic which happened to me when I updated to Kubuntu 10.04 LTS.

I got an error message saying that no tun device was available. This is because the generic kernel was shipped without the tun.ko module that snx (and many other VPNs) use to create a virtual network interface.
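
You can confirm whether the module is present and loaded before rebuilding anything; both of these commands are read-only:

modinfo tun           # is the module shipped with the running kernel at all?
lsmod | grep tun      # is it currently loaded?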

Luckily the user kazersozet posted a fix which I’m copy and pasting below. The basic fix is supplied at your own risk.

sudo apt-get install build-essential linux-headers-`uname -r`
mkdir faketun
cd faketun
echo -e "#include <linux/module.h>\nstatic int start__module(void) {return 0;}\nstatic void end__module(void){return;}\nmodule_init(start__module);\nmodule_exit(end__module);">tun.c
echo -e "obj-m += tun.o\nall:\n\tmake -C /lib/modules/\$(shell uname -r)/build/ M=\$(PWD) modules\nclean:\n\tmake -C /lib/modules/\$(shell uname -r)/build/ M=\$(PWD) clean\nclean-files := Module.symvers">Makefile
make
sudo install tun.ko /lib/modules/`uname -r`/kernel/net/tun.ko
sudo depmod -a
sudo modprobe tun

Edit: Please see the comments by Ove – for some reason WordPress was putting in a space; see the original post.

Edit3: I’ll just link to the Makefile and tun.c files.

It first installs the applications needed to compile software. Then it creates two files, tun.c (the source code for the new module) and Makefile (the instructions on how to compile it), in a new subdirectory called faketun. Then it uses the make command to compile the module and installs it into the correct directory with the install command. It then runs depmod to update the module dependencies and finally loads the new kernel module with modprobe.


HPR: Bash Loops

I have been thinking about doing a small episode on bash loops for some time, ever since I heard the guys at The Linux Cranks discussing the topic. Well, I found some time to record the show on my way to Ireland for the weekend. The show was recorded in an Airbus A320-200!

OK, so not the best audio quality, but it follows in the long history of Linux podcasting.

The show is available on the Hacker Public Radio website.

Here are the examples I used in the show.

user@pc:~$ for number in 1 2 3
> do
> echo my number is $number
> done
my number is 1
my number is 2
my number is 3

user@pc:~$ for number in 1 2 3 ; do echo my number is $number; done
my number is 1
my number is 2
my number is 3

user@pc:~$ cat x.txt|while read line;do echo $line;done
one-long-line-with-no-spaces
one long line with spaces

user@pc:~$ for line in `cat x.txt`;do echo $line;done
one-long-line-with-no-spaces
one
long
line
with
spaces


HPR – cron

This month’s Hacker Public Radio episode is on cron. First let me list out some good introductions to the topic:

  • https://help.ubuntu.com/community/CronHowto
  • http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5
  • http://unixgeeks.org/security/newbie/unix/cron-1.html
  • http://en.wikipedia.org/wiki/Cron
Virtually the entire show was taken from these links, so rather than repeat myself here I suggest you either listen to the show or follow the links yourself. I did give an example of how you might add a script to cron and I’ll list that here.

    username@computer:~$ vi /home/username/bin/hello.bash

    username@computer:~$ cat /home/username/bin/hello.bash
    #!/bin/bash
    echo "hello world"

    username@computer:~$ /home/username/bin/hello.bash
    bash: /home/username/bin/hello.bash: Permission denied

    username@computer:~$ chmod +x /home/username/bin/hello.bash

    username@computer:~$ /home/username/bin/hello.bash
    hello world

    username@computer:~$ export |grep EDITOR
    declare -x EDITOR="vim"

    username@computer:~$ crontab -l
    no crontab for username

    username@computer:~$ crontab -e
    no crontab for username - using an empty one
    No modification made

    username@computer:~$ crontab -e
    no crontab for username - using an empty one
    crontab: installing new crontab

    username@computer:~$ crontab -l
    # m h dom mon dow command
    * * * * * /home/username/bin/hello.bash > /home/username/hello.output 2>&1

    username@computer:~$ cat /home/username/hello.output
    hello world
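
    The crontab entry above runs the script every minute, which is handy for testing, but for a real job you would normally narrow the five time fields down. For example, this hypothetical entry would run the same script at 06:30 on weekdays only:

    # m h dom mon dow command
    30 6 * * 1-5 /home/username/bin/hello.bash > /home/username/hello.output 2>&1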

    Following on from my recommendation of the Spud Show last time, I’d like to recommend Rathole Radio this time. This is a show by Dan Lynch, aka the UK Outlaw, a "programmer, musician and full-time layabout from Merseyside, UK." To quote the man himself: "Rathole Radio is a fortnightly Internet radio show and podcast about culture, technology and politics with humour, guests, the best in new music and even exclusive live performances."

    I love the eclectic sound derived as a result of Dan’s simple motto “I only play stuff I like.”

    Go have a listen.


    Connection points in Visio

    I like to use the connector line tool when creating interrelationships between shapes in Visio drawings. The advantage of this is that you can move shapes around and the line will stay connected to both. Although the line ordering will get messed up, at least you haven’t lost the logic. A connection point will appear as a white 'x' on your object and when you connect the line the end point will briefly turn red to indicate a join.


    A problem I often come across when editing diagrams is that there are too few connection points. A rectangle shape typically has only 4 connection points but I usually want to add more connection lines. Eventually I tracked down an article entitled Work with connection points that explained how to add more connection points.

    The trick is first to select your target object, then select the “Connection point tool” – a blue “X”. You will find it on the menubar in the same drop down menu where you selected the line (Connector Tool).


    Now hold down the Control key and each click will add a connection point.

    While you’re at it try giving dia a spin.


    Say Thanks.

    I was preparing to write up the script/show notes for my next HPR episode on the way home from work tonight. Rather than listen to a technical podcast I decided to put on some background music. As luck would have it, Spud Show 400 had just been downloaded. You may know that I have spoken about the Spud Show before. I was having a bit of an issue with kdm loading, so I was a bit preoccupied and it took me a while to tune in to what Brendan was saying. Then it occurred to me that he was using the word "ken" a lot. I am used to people using "ken" in sentences and not talking about/to me, as in Dutch "ken" means "know". Anyway, when my brain finally kicked in I restarted the show and listened more carefully.

    Brendan had put together a special show entitled “Ken’s Spud Show” featuring all the artists that I had liked since the start of his show. I am absolutely amazed, thrilled, chuffed, honoured and as the entire selection is from me the music selection is brilliant 🙂

    Why did he do this? Well, in 400 episodes I was the only one ever to give him any feedback.

    I don’t know what his subscriber numbers are, but it wasn’t very hard to find his show, so I guess I can’t be the only one subscribing. So why don’t people send in some feedback? The guy is producing good work and yet no one can be bothered to write him an email. I always make a point of emailing the podcasters that I listen to, because I know from doing my episodes on HPR that you get very little feedback. But when you do get an email or a comment from someone (that isn’t spam) it makes your day.

    So I would like you to make a new years resolution with me.

    Say Thanks.

    This is especially true if you’re getting something for Free. Don’t limit yourself to podcasters. If you use FLOSS take the time to look in the man page or click on the help – about to find an email address and just fire off a one liner.


    User Agent switching for Google Chrome

    I’ve been giving Google Chrome a test run and so far I am undecided. It appears to be faster than Firefox, but I’m not sure that it’s as fast as Konqueror. One thing I noticed was how animated the whole Internet is, with Flash, image animations and JavaScript messing about. I’ll let you in on a little secret: for years I have not been seeing the Internet as most people experience it.

    • I turn image animation off in Firefox: open about:config and set
      image.animation_mode to once.
    • I use Flashblock to control which sites I allow to run Flash.
    • I run the NoScript plugin to give me control over which sites I will let run JavaScript.

    All this leads to a calmer and much less distracting experience. I don’t block advertisements, but the steps above ensure that I don’t see the hard-sell ones.

    Within Chrome there doesn’t appear to be any way to turn off animated images or control JavaScript, and there is no NoScript extension as yet. Confusingly there are two FlashBlock extensions; both appear to work fine.

    I was happy to see that the Xmarks (formerly Foxmarks) plugin was available. I have been using it for years without problems. Since I installed it on Chrome the performance has been flaky across all computers and browsers. It could be the maturity of the Chrome plugin or the fact that I just doubled its workload by adding Chrome on every computer. Either way, I disabled it on Chrome.

    One thing that is missing is the ability to switch the User Agent string. It is hard to believe that websites still think they know better than me what browser and OS I should be running. While there is no plugin or menu configuration for this, I found a site that explains how you can start Chrome by passing it a user-agent switch.

    Typing about: in the URL bar will normally give:

    Google Chrome	4.0.249.43 (Official Build 34537)
    WebKit	532.5
    V8	1.3.18.16
    User Agent	Mozilla/5.0 (X11; U; Linux i686; en-US)
    AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5

    After starting chrome with the command

    $/opt/google/chrome/chrome -user-agent="Mozilla/4.0 (compatible;
    MSIE 6.0; Windows NT 5.1)"

    Typing about: in the URL bar will now give:

    Google Chrome	4.0.249.43 (Official Build 34537)
    WebKit	532.5
    V8	1.3.18.16
    User Agent	Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

    I can’t really say much more at this stage other than I hope Firefox can go back to their roots and produce a small and fast browser again. I hope Google will continue their business arrangement with Mozilla so they can do that. One thing we have learned since Firefox came on the scene is that browser competition is good for everyone.
