Triple 4k monitors

Well, shit kinda got out of hand, and now I have three 4k monitors on my desk.

I didn’t intend for this to happen. What happened was I thought that one of the monitors I use for my computers on the bench and in the booth was broken, so I hastily ordered a replacement. Then it turned out that the monitor wasn’t broken (the power cord had fallen out), but by the time I discovered that, eBay was telling me it was too late to cancel my order.

So for a lousy AU$250 I now have a third monitor attached to my primary workstation. Since I have pretty much no need or use for a third monitor, what I’ve done is create a desktop background image for it with a bunch of reference material.

The smallest and worst HDMI display ever

This is great: The smallest and worst HDMI display ever. There’s a write-up over here. This guy has plugged an OLED device directly into the HDMI port on his laptop. On a related note, I thought Craig would appreciate this one: Building a tiny steampunk “HDMI” display, from the same author (I recommend watching at 2x speed).

Watching the web-logs on all of my servers in real time

I have a computer sitting on my desk that is always on (it’s my file server), and it has a monitor attached which is almost never in use (I ssh to that server when I want to do things, so the console is hardly ever logged in).

I thought it would be cool to show the web-logs from all of the systems I manage on that monitor, so I could keep an eye on things and maybe learn a thing or two about my web-sites and how people are using them.

So the first thing I did was write a script to grab any given web log:

root@orac:~# cat /root/get-web-log.sh
#!/bin/bash
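# $1 = remote host to ssh to
# $2 = Apache log file under /var/log/apache2 on that host
# $3 = local file name under /var/log/web
# (the grep -v filters below drop requests from well-known crawlers)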
echo Starting download of $3...
while : ; do
  su -c "ssh $1 tail -f /var/log/apache2/$2 < /dev/null" jj5 \
    | tee -a /var/log/web.log \
    | grep --line-buffered -v "Mozilla.5.0 .compatible. Googlebot.2.1. .http...www.google.com.bot.html." \
    | grep --line-buffered -v "Baiduspider...http...www.baidu.com.search.spider.htm." \
    | grep --line-buffered -v "Mozilla.5.0 .compatible. Baiduspider.2.0. .http...www.baidu.com.search.spider.html." \
    | grep --line-buffered -v "Mozilla.5.0 .compatible. Exabot.3.0. .http...www.exabot.com.go.robot." \
    | grep --line-buffered -v "Mozilla.5.0 .compatible. YandexBot.3.0. .http...yandex.com.bots." \
    > /var/log/web/$3
  sleep 60
  echo; echo; echo Restarting download of $3...; echo; echo;
done

Then I wrote a series of scripts which call the get-web-log.sh script for specific web-sites on specific servers, e.g.:

root@orac:~# cat /root/web-log/get-jsphp.co
#!/bin/bash
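# arguments: remote host, Apache log file on that host, local log name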
/root/get-web-log.sh honesty www.jsphp.co-access.log jsphp.co
exit

Then I wrote a main script, rather unoriginally called info.sh, that kicks off the web-log downloads and then monitors their progress as they come through:

root@orac:~# cat /root/info.sh
#!/bin/bash

# disable the screensaver
setterm -blank 0 -powersave off -powerdown 0

# start downloading the web-logs
cd /root/web-log
./get-jsphp.co &
sleep 1
#...all the other downloaders, one for each site

# watch the web-logs
cd /var/log/web
tail -f *
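# (tail blocks here showing all the logs; when it exits, the downloaders below get cleaned up)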

# stop downloading the web-logs
kill %1
#...all the other kills, one for each downloader

exit
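
Note that each downloader runs as its own background job, so the clean-up section needs one kill per job. An alternative (just a sketch, not what the script above does) would be to kill every background job in one go:

# alternative clean-up: kill all the background downloaders at once,
# rather than listing kill %1, kill %2, ... one per job
kill $(jobs -p) 2> /dev/null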

Then I edited /etc/init/tty1.conf so that tty1 automatically runs my info.sh script instead of presenting a login console:

root@orac:~# cat /etc/init/tty1.conf
# tty1 - getty
#
# This service maintains a getty on tty1 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
#exec /sbin/getty -8 38400 tty1
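# run info.sh on tty1 instead, with its input and output attached to the console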
exec /root/info.sh < /dev/tty1 > /dev/tty1 2>&1

And that was it. The only trick was that I needed to disable the screen saver (as shown in the info.sh script) so that the screen didn’t constantly blank.

And now I can watch the web activity on all of my sites in real time.