December 05, 2019

Scripting discovery of random YouTube videos

(I just discovered this post as an unpublished draft from about three years ago. No idea why I didn't publish it. The method described in it still works. All the code is bash script. There's a comment in the last bit of code, "needs more refining", which I leave as an exercise for those so inclined.)

Do you now, or have you ever, wanted to script discovery of random YouTube videos? I did recently and couldn't find anything useful online. So I made up my own method.

If you're thinking YouTube videos are identified by 11 character strings so you can generate a random 11 character string and use that, you're not technically wrong, but it's not the way to go about it. As a test I generated 1000 and none of them were valid. This isn't at all surprising given how many possible values those 11 characters provide. In my observation, each character can be a lower or uppercase letter, a number, or a -. That's 63 possible characters. A calculator tells me that 63^11 is 62050608388552823487. (If you want to say that out loud, say "sixty-two quintillion", then mumble a bit.)

function getVideoID  {
   local id="";
   # keep trying random four-character search queries until a results page
   # yields at least one video id
   while [ -z "${id}" ];do
      id=$(curl -s "https://www.youtube.com/results?search_query=$( < /dev/urandom tr -dc 'A-Za-z0-9-' | head -c4)" | grep -o 'watch?v=[a-zA-Z0-9-]\{11\}' | sort -u | sort -R | head -1);
   done
   echo "${id/watch?v=/}";
}

That gets you a valid id, such as dQw4w9WgXcQ. If you discover videos entirely at random some of what you find will be NSFW. Really. It will be. The method I use to filter out NSFW content uses youtube-dl.

function getVideoUrl  {
   local url="";
   url=$(./youtube-dl --age-limit 0 --get-url "${1}");
   echo "${url}";
}

videoID=$(getVideoID);

videoUrl=$(getVideoUrl "${videoID}");

If ${videoUrl} is not zero length then, in my experience at least, the video is SFW and its value is a URL of the raw video which could be used as an input value for ffmpeg or whatever. (To emphasise, it is *my experience* that this method filters out NSFW content.) If you just want to download the whole video, youtube-dl can do that for you. (youtube-dl will find the highest quality version of the video by default. You may want to change that depending on your available bandwidth or what you intend to do with the video.)
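The zero-length test can be wrapped up in a small helper if you're chaining these steps together. This is just a sketch of the check described above; isProbablySFW is my name for it, not anything from youtube-dl:

```shell
# hypothetical helper: with --age-limit 0 youtube-dl prints nothing for
# age-restricted videos, so an empty url means "skip this one"
function isProbablySFW  {
   [ -n "${1}" ];
}

videoUrl="https://example.invalid/raw-video";  # stand-in for getVideoUrl output
if isProbablySFW "${videoUrl}";then
   echo "keep";
else
   echo "discard";
fi
```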

Some videos on YouTube have a video component that is just a static image. E.g. someone's ripped an album and then combined the audio with the album cover art to create something that can be uploaded to YouTube. Such videos are visually uninteresting and maybe you want to identify those videos and discard them rather than use them in whatever it is you're doing that involves random YouTube videos. I did, so I worked out a way of doing that too. The method I've used is to generate a bunch of images from the video, then compare them in a way which produces a value that represents how much the images differ. If that value is less than a certain value, discard the video. I've used GraphicsMagick for comparing the images. ImageMagick can be used too but is slower. (The less powerful your hardware, the bigger the speed difference is. ImageMagick's output is slightly different to GraphicsMagick's, so you can't just remove the "gm"; the awk and cut arguments would need changing too.) To extract the images you obviously first have to download the video, and in the code below the downloaded video is theVideo.mp4

# generate an image at 2 second intervals
ffmpeg -loglevel fatal -i theVideo.mp4 -vf fps=1/2 -y foo__%02d.jpg

if [ $? -eq 0 ];then

  # get an integer value that represents how different the images all are to each other
  v=$(gm compare -metric MAE foo__*.jpg null:-  | grep Total | awk '{print $2}' | cut -d . -f 2);

  if [ -n "${v}" ] && [ "${v:0:2}" != "00" ] && [ "${v:0:2}" != "01" ];then # needs more refining: 018 019 OK, 010 not OK, maybe test 3rd char too
    # the video isn't a static image
    # do whatever it is you want to do with it
  fi

fi

I arrived at discarding videos where the first two characters of v are 00 after calculating v for a bunch of videos.
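For what it's worth, the prefix test can be pulled out into its own function if you want to experiment with thresholds. isStatic is my name, and the 00/01 cut-offs are just the ones from the code above:

```shell
function isStatic  {
   local v="${1}";
   # empty, or a MAE-derived value starting 00 or 01, is treated as a static image
   [ -z "${v}" ] || [ "${v:0:2}" = "00" ] || [ "${v:0:2}" = "01" ];
}

isStatic "0023456789" && echo "static" || echo "moving";
isStatic "2345678901" && echo "static" || echo "moving";
```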


March 05, 2017

VMware Horizon Client and The Installation Was Unsuccessful

Are you trying to install the VMware Horizon Client for Linux, having previously uninstalled it, and are finding that the installer exits immediately after asking you questions with the utterly useless error "The Installation Was Unsuccessful" and no clue at all as to why? If so check to see if you have a directory called /usr/lib/vmware-installer-horizon and if you do, delete it. Deleting that directory is what made the installer work for me. I figured it out all by myself, y'know. I found absolutely nothing of any use online, hence this blog post, which might one day mean someone finds the aforedescribed scenario less utterly infuriating than I did.

I encountered the problem with VMware-Horizon-Client-4.3.0-4710754.x64.bundle on Fedora 25. Fedora 25 isn't supported; the installation works but vmware-view then doesn't run. I got it to run by doing

[root@boy ~]# cd /usr/lib64/
[root@boy lib64]# ln -s libudev.so.1 libudev.so.0

January 11, 2017

Quick and easy way to get ffmpeg on Raspberry Pi running Raspbian

The Raspbian repos have avconv instead of ffmpeg. If, like me, you want ffmpeg because it has some functionality not available in avconv then you can use static builds. Go to https://ffmpeg.org/download.html#build-linux, click the 'Linux Static Builds' link and so on. If you have a Raspberry Pi 3 get the armhf build. If you have one of the first generation Pis you need the armel build. I don't have a Pi 2, but if you do, why not try both builds and leave a comment about which one works.
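If you want to script the choice, uname -m is one way to tell the generations apart. My assumption here (not tested on every model) is that a first generation Pi reports armv6l and a Pi 2/3 running 32-bit Raspbian reports armv7l:

```shell
# pick the static build flavour from the machine hardware name
function pickBuild  {
   case "${1}" in
      armv6l) echo "armel";;          # first generation Pi (ARMv6)
      armv7l|armv8l) echo "armhf";;   # Pi 2 and Pi 3 with 32-bit userland
      *) echo "unknown";;
   esac
}

pickBuild "$(uname -m)";
```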

There's a bunch of blog posts about getting ffmpeg on Raspbian by compiling it from source. I don't know if that'd result in a binary more optimised for the Pi than the static builds referred to above. I've found the performance of the static builds adequate, so I haven't bothered trying to build it. I suspect that doing little more than a git checkout and build, as all the guides I found described, would result in a binary with the functionality I want. It'd take a while to find out given how long a build would take on the Pi, especially a first generation one. Though you could cross compile on an x86 machine if you were so inclined.

First generation Pi running Raspbian 8

pi@pione:~ $ ffmpeg-3.1.4-armel-32bit-static/ffmpeg
ffmpeg version 3.1.4-static http://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 5.4.1 (Debian 5.4.1-2) 20160904
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --enable-libmp3lame --enable-libx264 --enable-libwebp --enable-libspeex --enable-libvorbis --enable-libvpx --enable-libfreetype --enable-fontconfig --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvo-amrwbenc --enable-gray --enable-libopus --enable-libass --enable-gnutls --enable-libvidstab --enable-libsoxr --enable-frei0r --enable-libfribidi --disable-indev=sndio --disable-outdev=sndio --enable-librtmp --cc=gcc-5 --disable-ffplay
  libavutil      55. 28.100 / 55. 28.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 41.100 / 57. 41.100
  libavdevice    57.  0.101 / 57.  0.101
  libavfilter     6. 47.100 /  6. 47.100
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
  libpostproc    54.  0.100 / 54.  0.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

Use -h to get full help or, even better, run 'man ffmpeg'
pi@pione:~ $ 


Pi 3 running Raspbian 8

pi@pithree: $ ffmpeg-3.2.2-armhf-32bit-static/ffmpeg
ffmpeg version 3.2.2-static http://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 5.4.1 (Debian 5.4.1-3) 20161019
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
  libavutil      55. 34.100 / 55. 34.100
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.100 / 57. 56.100
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

Use -h to get full help or, even better, run 'man ffmpeg'
pi@pithree:~/RandomYouTubeTwitterBot $ 


July 26, 2016

Codec support for openSUSE Leap 42.1

Do you use openSUSE Leap 42.1? Do you want Twitter to stop displaying "This browser does not support video playback." when you're looking at it in Firefox? Do you want support for stuff like watching DVDs and H.264/AAC in an mp4 container in gstreamer (used by the GNOME Videos application (Totem), Parole and others)? Do you want to do these things without polluting your install by adding third party repositories and replacing packages provided by openSUSE? Do you want to do that stuff on a machine you don't have root on? If you answered yes to any of the previous questions, keep reading.

For some years I used to maintain machines running SUSE Linux Enterprise Desktop and rolled my own solution for adding codec support to them by way of a single package that doesn't conflict with anything provided by SUSE. (The last iteration can be found at https://www.suse.com/communities/blog/additional-multimedia-codec-support-sled-12/) Having installed openSUSE Leap 42.1 I found that the recommended method for adding codec support was a page which said something like "this isn't available for technical reasons, try this other place", and that other place talked about the phonon backend with no mention of gstreamer. So I decided to build my own solution for openSUSE too. You can get it by clicking Stuff to add codec support to openSUSE Leap 42.1


You need to read the README.txt file for full details, but to give you an idea of what’s involved, the build process is as follows:

$ ./build


It'll tell you if there are packages you need to install. Install those, then run the script again. By default the plugins will be built to live in /opt/multimedia. If you want them to live somewhere else then change the line

prefix=/opt/multimedia;

to reflect where you want to put them. E.g. if you want them in your home directory you could use

prefix="${HOME}/.multimedia";

By default an rpm will be built but if you set the prefix to something in your home directory the rpm won’t be built as it’s assumed you’re specifying your home directory as the prefix because you don’t have root and hence can’t install an rpm.

What the script basically does is build a bunch of gstreamer plugins, stick them somewhere they don't clash with what's in openSUSE packages, and put something in place so gstreamer can find them. Making Firefox play videos in Twitter rather than display the "This browser does not support video playback." is done by including ffmpeg, which Firefox will use for video playback if it's installed. (Far as I can tell, the significant file is libavcodec.so. The ffmpeg binaries like ffmpeg, ffserver etc are also included.)
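The standard mechanism for pointing gstreamer at plugins outside the system directories is the GST_PLUGIN_PATH environment variable. I'm assuming here that the plugins end up in a gstreamer-1.0 subdirectory under the prefix; the "something in place so gstreamer can find them" would then be along these lines:

```shell
# make gstreamer search the non-default location as well as its built-in paths
export GST_PLUGIN_PATH="/opt/multimedia/lib/gstreamer-1.0${GST_PLUGIN_PATH:+:${GST_PLUGIN_PATH}}";
```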

There's nothing for hardware decoding of H.264 included. I have an Nvidia card and use the proprietary Nvidia drivers. I can get hardware acceleration for H.264 by installing gstreamer-plugins-vaapi which is in the standard Leap 42.1 repos. Unfortunately installing it renders Totem unable to play H.264 video. It displays black for a few seconds then borks. The version of gstreamer-plugins-vaapi included in Leap 42.1 is 0.5.10. I found that 0.7.0 is the latest version that will build with the gstreamer 1.4.5 that's included in Leap, but that didn't work any better for me. (Seems it did for this guy though https://blogs.gnome.org/ovitters/2015/12/23/hardware-accelerated-video-playing-with-totem/ ). Parole, the XFCE video application, works though. It uses GTK and gstreamer and works fine in GNOME. If you want to get minimalist about it, you could also use gst-play-1.0

$ gst-play-1.0 --interactive video.mp4

July 12, 2016

Fun with spacewalk-repo-sync fail

This is one of those posts for the sake of replicating information that it took me far too long to find online and then only in one place, so maybe someone else finds this and it helps them.

I was trying to mirror a repo with Spacewalk and spacewalk-repo-sync was failing

[root@thing ~]# spacewalk-repo-sync -c  gitlab-centos7-x86_64 
#### Channel label: gitlab-centos7-x86_64 ####
Repo URL: https://packages.gitlab.com/gitlab/gitlab-ce/el/7/x86_64/
ERROR: Cannot retrieve repository metadata (repomd.xml) for repository: gitlab-centos7-x86_64. Please verify its path and try again
Sync completed.
Total time: 0:00:01
[root@thing ~]#

As messages about failure go, that's not very useful. If you know how rpm repos are organised you can work out that the full url of the repomd.xml file is https://packages.gitlab.com/gitlab/gitlab-ce/el/7/x86_64/repodata/repomd.xml but it's not stated in the output. Also there's no information at all about why repomd.xml couldn't be retrieved. Unhelpfully, spacewalk-repo-sync lacks any options that can be used to provide any kind of additional information about exactly what it's doing and what might have gone wrong. (For Google bait I'll include the words verbose, debug and debugging at this point.) I did know that spacewalk-repo-sync worked for other repos that were set up in Spacewalk, so it had to be something about that repo which it didn't like. Eventually I discovered https://www.novell.com/support/kb/doc.php?id=7014059 which includes this gem:

In case the above recommended settings do not solve the issue, please run:


export URLGRABBER_DEBUG=DEBUG
spacewalk-repo-sync -c <channelname> <options> > /var/log/spacewalk-repo-sync-$(date +%F-%R).log 2>&1

So I ran

[root@thing ~]# URLGRABBER_DEBUG=DEBUG spacewalk-repo-sync -c  gitlab-centos7-x86_64 

and got lots of output which revealed the "Cannot retrieve repository metadata" error was caused by an SSL certificate not being trusted. Knowing that, I was able to determine there was an issue with the ca-bundle.crt on the server I was running spacewalk-repo-sync on. Once I'd fixed that, spacewalk-repo-sync was able to mirror the repo in question.

Once I knew that spacewalk-repo-sync uses urlgrabber I could effectively replicate the issue by running

[root@thing ~]# URLGRABBER_DEBUG=DEBUG urlgrabber https://packages.gitlab.com/gitlab/gitlab-ce/el/7/x86_64/repodata/repomd.xml

Right now, even knowing that spacewalk-repo-sync uses urlgrabber, I can find barely anything online about how to find out why spacewalk-repo-sync is failing.


May 15, 2015

Because 'dconf update' looks at modification time of directory not files

Are you making modifications to a dconf profile, running 'dconf update' and wondering why the settings aren't applying? I was. And this is why…


I have some dconf settings that I want to set conditionally and that's done by writing relevant value to a file like this:

#!/bin/bash

F=/etc/dconf/db/foo.d/blah

if [ "$(something)" = "yes" ];then
   V='true';
else
   V='false';
fi

cat > "${F}" << EOF
[what/ever/]
key=${V}
EOF

dconf update

/etc/dconf/db/foo.d/blah was getting updated, but the setting wasn't being applied. I noticed the binary database /etc/dconf/db/foo wasn't being updated, which was evident by the modification time stamp not changing after 'dconf update' was run.

Eventually I discovered that 'dconf update' doesn't look at the file modification times, it looks at the modification time of the directory containing the files. See https://bugzilla.gnome.org/show_bug.cgi?id=708258 Changing the contents of the file by writing to it with cat doesn't cause the modification time of the enclosing directory to change. So I needed to also change the modification time of foo.d, which can be done with touch

#!/bin/bash

F=/etc/dconf/db/foo.d/blah

[ as above cut for brevity ]

touch "$(dirname "${F}")"
dconf update

With that additional touch command, things started to work as desired.


October 05, 2014

Matlab Log File 'Art'

Sometimes I like to create an image from a dataset, just because. (Previously http://blogs.warwick.ac.uk/mikewillis/entry/useless_visualisation_of/) Also a few weeks ago I was looking at Matlab log files a lot (http://blogs.warwick.ac.uk/mikewillis/entry/fun_with_flexlm/). And thus, this

26th July 2014. Solarized colours.

(Click to embiggen.)

It's generated from Matlab license check outs for a single day. The image consists of 60 concentric circles split into 24 segments. Each circle represents one minute, the innermost circle being 0 and the outermost 59 minutes past the hour. Each segment represents one hour. Midnight is where 12 would be on a clock face, noon is where 6 would be. A coloured segment indicates that a license was checked out during that hour. The distance from the centre represents the minutes past the hour when the license was checked out. Each segment is drawn with an opacity of 25%. The brighter the segment the more licenses checked out, though the brightness tops out at four licenses. (Finer graduations would mean that segments representing a single license would be really faint.) For example, the image below shows from inner to outer:

  • 1 license checked out at N minutes past midnight then being checked in sometime between 01:00 and 02:00
  • 2 licenses being checked out at N+1 minutes past 01:00 with one checked in again during the same hour and no check in time being found for the other license.
  • 4 licenses being checked out N+2 minutes past 02:00 then checked back in again sometime in the same hour (not necessarily at the same time).

Matlab log file art example image.

There is an element of doubt around tracking when a given license is checked in again. The Matlab license server log does not allocate any sort of identifier to a license check out so it's impossible to definitively identify when it was checked in again. I have taken the check in time to be the first time that a check in by the user@host combination occurs after they checked out a license. A user checking out multiple licenses from a single host could make that assumption incorrect.

The colours are the accent colours of the solarized palette http://ethanschoonover.com/solarized

Here's an image from the same data using colours used by Pirelli to denote the different compounds of their Formula 1 tyres.

26th July 2014. Pirelli Formula 1 tyre compounds colours.

This is with the colours of the RAF roundel. (Things which are round…)

26th July 2014. RAF roundel colours.

I was going to try doing one with University colours. Then I discovered the Corporate Identity part of the University website, which used to provide details of a colour palette for use in things University related, currently only provides details for a single shade of blue.

The images are generated using a bash script and ImageMagick. The script draws up to 5000 segments at a time. Initially it drew one at a time but it's a lot quicker drawing multiple segments at the same time. 5000 seemed like a nice number that didn't trip that error you get when bash command arguments are too long. It's nowhere near 5000 times quicker to draw segments 5000 at a time. This is due, to some degree I don't care enough to work out even roughly, to the temporary images being stored as mpc (http://www.imagemagick.org/Usage/files/#mpc) on tmpfs, thus minimising I/O overhead. (I (mis)use /dev/shm for this sort of thing since it's already there and usually has enough space.) Images are generated at 10000x10000 then shrunk. This is done to remove small unwanted artefacts which sometimes show up between adjacent segments in the same circle. Like this

Matlab log file art artefact example.

As that example shows, they don't appear consistently and I'm not sure why they do. I can't make them not occur without leaving gaps. If the images are generated at 1000x1000 the artefacts show up. If the images are generated at 10000x10000 the artefacts show up, but conveniently this detail is lost when the image is shrunk to 1000x1000.

Other examples which I find less aesthetically pleasing than the one linked here can be seen at http://blogs.warwick.ac.uk/mikewillis/gallery/matlab_log_file_art/


August 25, 2014

Fun with flexlm log files

All three scripts referred to in this post can be found in Fun scripts for processing flexlm log files and have been tested on Linux, Mac OS X and Solaris.

Edit @ 21/11/2014. If your flexlm file comes from a Windows machine, run it through dos2unix first, otherwise the output will have newlines in places you don't want them.

Edit @ 06/11/2016. Fixed processor count being wrong when the processor model name includes the word "processor". Thanks to Chris Tothill for pointing that out.

I've recently found myself looking at the flexlm log for a Matlab license server a lot. I've been mainly wanting to know two things: when did a given person check out a license, and who has checked out licenses between two dates. The format of the flexlm log file makes it quite annoying to answer those questions because the lines recording licenses being checked out and in don't contain date info. All you get is something like this

11:55:52 (MLM) OUT: "MATLAB" ringo@blerg

There are occasional lines in the log that give you the date

18:00:54 (MLM) TIMESTAMP 3/14/2013

So if you want to know the date of a particular entry, you can backtrack from it until you find a TIMESTAMP line. It's cumbersome and annoying and gets more so the busier the license server is. I was dealing with a log file that had grown to over 3 million lines covering about 17 months. Sometimes the preceding TIMESTAMP is over 500 lines away. Also the date in the TIMESTAMP lines is in the US format month/date/year, which my brain finds hard to deal with. It's a stupid way to write the date. It's not big endian, it's not little endian, it's just a mess that causes endless grief for people not in the US.
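The manual backtracking can itself be scripted. This is just a sketch of the idea (date_of is my name for it, and it ignores the midnight-rollover wrinkle): take everything in the log up to the first line matching a pattern, then keep the last TIMESTAMP line seen before it.

```shell
function date_of  {
   # ${1} is a grep pattern, ${2} the log file; print the TIMESTAMP line
   # closest above the first match (empty output if none precedes it)
   grep -B 10000 -m 1 "${1}" "${2}" | grep TIMESTAMP | tail -1;
}
```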

Unable to Google up any such pre-existing thing, I ended up writing a script flexlm_add_dates_to_log to parse the log file and prepend dates to all the lines. (As Google bait I'll also say that it adds the date and that you want to add the date.) The result looks like:

2013-03-18 01:06:12 (MLM) TIMESTAMP 3/18/2013
2013-03-18 01:06:15 (MLM) OUT: "MATLAB" george@blah1
2013-03-18 01:06:15 (MLM) IN: "MAP_Toolbox" ringo@blah2
2013-03-18 01:06:16 (MLM) IN: "MATLAB" john@blah3
2013-03-18 01:06:16 (MLM) OUT: "MAP_Toolbox" paul@blah4

Output is written to stdout, you'll probably want to dump it to a file in a suitable location for later analysis.

Figuring out which date to prepend turned out to be trickier than it first appeared. At first glance you just go through the file one line at a time and every time you find a TIMESTAMP you write that date at the start of the following lines. But TIMESTAMP lines are written at intervals of six hours. So you end up with sections of the log that look like this

22:26:18 (lmgrd) TIMESTAMP 5/22/2013
23:19:37 (MLM) OUT: "MATLAB" ringo@blah1
23:21:07 (MLM) OUT: "MAP_Toolbox" john@blah2
23:45:24 (MLM) IN: "MATLAB" paul@blah3
23:45:24 (MLM) IN: "MAP_Toolbox" george@blah4
0:19:37 (MLM) OUT: "MATLAB" ringo@blah1
0:21:07 (MLM) OUT: "MAP_Toolbox" john@blah2
0:45:24 (MLM) IN: "MATLAB" paul@blah3
0:45:24 (MLM) IN: "MAP_Toolbox" george@blah4
4:26:18 (lmgrd) TIMESTAMP 5/23/2013

Well not exactly like that, for one thing MLM also writes TIMESTAMP lines and secondly I've copy/pasted some lines and changed the hour, but it illustrates the problem. The first TIMESTAMP in that example is on the 22nd. So in the above example the 22nd would be prepended to all the IN and OUT lines. But look at the time of the last four IN and OUT lines, they actually happened on the 23rd. The way I dealt with that was to also track the hour. When a TIMESTAMP is found the hour is noted and stored as $hour_last_verified. When an IN or OUT line is encountered, the hour is extracted and if it's less than $hour_last_verified, e.g. $hour_last_verified is 23 and the hour just found is 0, then the date prepended is incremented by a day. (It's actually helpful that the time format used is a half arsed 24 hour clock, half arsed because the hour value isn't padded with a leading zero yet the minutes and seconds are. So you don't have to mess around converting 02 in to 2 to do an integer comparison. My script pads the hour value in the output for neatness.)
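The core of that logic can be sketched like this. To be clear, this is a simplified reconstruction for illustration, not the actual flexlm_add_dates_to_log script, and it assumes GNU date:

```shell
function add_dates  {
   local date="" last_hour=0 line hour;
   while IFS= read -r line;do
      case "${line}" in
         *TIMESTAMP*)
            # TIMESTAMP lines end with a US-format month/day/year date
            date=$(date -d "${line##* }" +%F);
            last_hour="${line%%:*}";
            ;;
         *)
            hour="${line%%:*}";
            if [ -n "${date}" ] && [ "${hour}" -lt "${last_hour}" ];then
               # the clock has rolled past midnight since the last TIMESTAMP
               date=$(date -d "${date} + 1 day" +%F);
               last_hour="${hour}";
            fi
            ;;
      esac
      # prepend the tracked date (lines before the first TIMESTAMP are dropped)
      [ -n "${date}" ] && echo "${date} ${line}";
   done
}
```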

Incrementing the date by a day turned out to be surprisingly problematic too. The script is written in bash, so the obvious way to do something like figure out what the day after a certain date is, is to call the date command. This is very easy with GNU date, but sadly Mac OS X and Solaris don't ship with GNU userland and the date commands they ship with lack the functionality you'd use with GNU date. I decided I wanted to make life difficult for myself by having the script work on Linux, Mac OS X and Solaris. I then spent far longer than was sensible trying to figure out a single way of doing the date calculation that would work on Linux, Mac OS X and Solaris without resorting to python or perl. I've concluded it's impossible. So the script looks to see if it can find what appears to be GNU date installed somewhere, then the date calculation is done by trying to do it the GNU way then if that fails, doing it another way. (If you want to get GNU userland on Mac OS X, install MacPorts and then install the coreutils package.)
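The try-GNU-then-fall-back approach looks something like this. Again a sketch, not the script itself: the GNU form is tried first, with the BSD-style date syntax that Mac OS X ships as the fallback (I haven't shown a Solaris branch):

```shell
function next_day  {
   # ${1} is a date in YYYY-MM-DD form; print the following day
   date -d "${1} + 1 day" +%Y-%m-%d 2> /dev/null || \
      date -j -v+1d -f %Y-%m-%d "${1}" +%Y-%m-%d;
}

next_day 2013-12-31;
```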


It turned out flexlm_add_dates_to_log can take a while to run if you have a very large file, like the 3 million line long one I had. (In my tests anything from 45 minutes upwards.) So I wrote a wrapper script for it, flexlm_add_dates_to_log_multi_thread_wrapper, which splits the log file into N chunks, where N is the number of CPU cores, processes them simultaneously then outputs the result to stdout. In my tests it's up to 80% quicker. As with flexlm_add_dates_to_log, this was also more problematic than I initially expected, but I won't bore you with how.


The question of who has checked out licenses between two dates is actually possible to answer from the raw flexlm log. I figured out how to do it with sed and a regular expression that matched TIMESTAMP lines, but then I neglected to save a copy of the regex anywhere, and also it required the date to be provided in the US date format. The script I wrote to do it using the output of flexlm_add_dates_to_log is flexlm_checkout_between_dates. It takes input from a file so you'll have to write the output of flexlm_add_dates_to_log to a file first. I didn't bother making flexlm_checkout_between_dates read from stdin because it would be very inefficient to keep running flexlm_add_dates_to_log then piping the output into flexlm_checkout_between_dates.


January 14, 2014

Webcam timelapse – 2013

Follow-up to Webcam timelapse – January 2013 and February 2013 from Mike's blag

Yep, this is the whole of 2013 through the University webcam. I say whole. There will be a few small gaps of a minute or so here and there because the machine on which the script that grabbed the images was running did not have 100% uptime. Also it seems that at 15:23:26 on 19th March the webcam crashed or something because all the images from that time until 11:35:01 on 21st March are the same.

As with previous timelapses, images were grabbed from webcam once per minute. The video is made with 48 images per second. Each day lasts about 29 seconds and the video is 2 hours 59 minutes and ~1.2GB. No, I haven't sat and watched it all the way through.


Download


I put the video together by making a video for each day then joining them up. It could be done all in one go but making separate videos means it's easier to spot issues. For example I noticed the video for 20th March had a considerably smaller filesize than the others and that the videos for 19th and 21st were also slightly smaller than average. It also reduces the risk of leaving something running, checking it two hours later and finding all the output is garbage.

I used ffmpeg. The command for each day's video looks like

$ ffmpeg -r 48  -pattern_type glob -i '*.jpg' -an -vcodec libx264 -f mp4 -threads 0  -b:v 1000k foo.mp4

It took about two hours to generate all the videos on a 2.8Ghz Intel Core 2 Quad. (A single video took about 16 seconds. On a 1.6Ghz Intel Core Duo a single video took about five and half minutes and on an ARM Marvell Kirkwood 1.2GHz it took about 42 minutes.)

To join them up you need to make a list of all the filenames

$ for i in mp4s/*;do echo "file '${i}'" ;done > list.txt

Then use ffmpeg's concat demuxer

$ ffmpeg -f concat -i list.txt -c copy  -movflags faststart webcam2013.mp4

The -movflags faststart argument tells ffmpeg to 'Run a second pass moving the index (moov atom) to the beginning of the file.' This means that when the video is viewed in a web browser playback can start straight away rather than waiting for the entire video to be downloaded.


October 02, 2013

Start.Warwick iOS/Android App

Writing about web page http://www2.warwick.ac.uk/services/its/servicessupport/web/mobileapps/start

Whilst out and about on Campus on Monday I was stopped several times by people asking if I knew where various rooms are. I wasn't much help. I have no idea where 'L4' might be and I've worked here for years. I've since discovered that the Start.Warwick App has a Campus Map feature to which you can give the name of a room and it shows you where it is. So if you're one of the many new people wandering around campus and very understandably feeling a bit lost, take a look.

