I am an extreme moderate

April 26, 2011


Filed under: Uncategorized — niezmierniespokojny @ 8:35 pm

I’ve been blessed with a talent to find bugs. And sometimes I wonder…why me? Why can’t things that I touch just work?

Got a netbook on Sunday. It came with Windows 7, so the first thing to do was install a good operating system. I thought about Lubuntu. I ducked it quickly and found a guide for Ubuntu; Lubuntu is very similar, so it should just work, right?
I downloaded the Lubuntu alternate CD (’cause I wanted full disk encryption) and, following the guide, created a startup thumb drive with Universal USB Installer, plugged it in and started the installation. Soon I found that the installer just *knows* it runs from a CD and searches for an optical drive. And refuses to continue w/out one. I ducked it too and indeed, it’s a known problem; Ubuntu alternate CDs have it too. So I decided to give up on encryption for the time being and downloaded a regular Lubuntu CD, put it on the thumb drive with Universal USB Installer and started the installation…only to be stopped exactly the same way as before.
OK, Mint LXDE?
The installer didn’t boot.
Along the way I tried to replace Universal USB Installer with the Ubuntu app. It didn’t work: the dialog box in which I was supposed to select a CD image didn’t show any images (or any files whatsoever), and manually typing the file name didn’t help.
So w/out a better option I decided to use regular Ubuntu (netbook edition). As soon as the installer started booting, I was greeted with the Mint name. What? It got me thinking… it seems Universal USB Installer is broken: when there’s already some OS on a drive, it doesn’t replace it correctly.
So I formatted the thumb drive and put regular Lubuntu on it. Installation went smoothly, except that the partitioning tool is (probably) inconsistent about using decimal vs. binary megabytes. When selecting a partition size it shows some number, but when you return to the list of partitions the number is a bit lower.
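If my guess about the partitioner is right, the mismatch is just a unit conversion slipping between screens; a quick sketch of the arithmetic (the behavior is my assumption, not something I verified in the tool):

```python
# Hypothesis: one dialog uses decimal megabytes (10**6 bytes) while another
# uses binary mebibytes (2**20 bytes), so the same partition shows two
# slightly different numbers.
def mb_to_mib(mb: float) -> float:
    """Convert decimal megabytes to binary mebibytes."""
    return mb * 10**6 / 2**20

size_entered = 10000  # 10000 MB typed into the size dialog
size_listed = mb_to_mib(size_entered)
print(f"{size_entered} MB = {size_listed:.0f} MiB")  # about 4.6% lower
```

That ~4.6% gap would match a number that is "a bit lower" on the partition list.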
After installation, when the system was restarting, it hung soon before it was supposed to shut down. So I helped it a little.
Starting up – I was shown the Ubuntu welcome screen. I guess that with incomplete drivers (remember, I was still supposed to install them – the chipset-GPU ones) it shows a different welcome screen, and the Lubuntu folks forgot to replace that one.
When the system booted, I opened the start menu to learn that it was empty. No programs, just the “Run” command.

At this point I was too disheartened to try to fix it. I’ll probably reinstall it… but not yet.

April 13, 2011

64k is the new 128k

Filed under: Uncategorized — Tags: — niezmierniespokojny @ 1:57 pm

Members of the audiophile forum Hydrogenaudio ran a listening test of audio codecs at 64 kbps. If you haven’t followed the progress in this field (like I hadn’t), you’re probably wondering why audiophiles would choose a bitrate this low.
The answer is simple: modern codecs (like aoTuV Ogg Vorbis) are practically transparent at 128k – that is, you can’t tell the difference between the original and the encoded files.
And look at the 64k results: with listeners rating samples from 1 to 5, the winner, CELT (soon to be folded into Opus), got an average score of 4. Not bad for 64k, huh? Certainly overkill for my phone player.

What makes me sad is that Ogg Vorbis was a clear loser: not only did it use the most space (all codecs are VBR, so the exact bitrate differs), it also got the worst sound. It’s the top dog at 100+k, but apparently it doesn’t scale down to bitrates this low. This makes me question Google’s choice to allow only Vorbis in its latest WebM video format, with no plans for more. It makes it easier to ensure decoders are compatible, and Vorbis is great for HD movies, but there are many uses for codecs that trade quality for size, and there are more than a few of them around the web.
I really hope Matroska adds support for VP8 (the WebM video codec) in MKV files…

And I wonder what comes next. How far can we squeeze bitrates? How far will we squeeze them? Disk space keeps getting cheaper, and so do bandwidth and CPU power, so the incentives get smaller and smaller; even if there’s plenty of room for improvement, at some point it will stop being worth it.

April 1, 2011

Flawed testing, part 1

Filed under: Rants — Tags: — niezmierniespokojny @ 10:03 am

I keep seeing the same errors in benchmarking.
The first is looking too closely at arbitrary data points.
Let’s take the Computer Language Benchmarks Game as an example.
It benchmarks languages on 3 major characteristics:

  • compiled (interpreted) code run time
  • compiled (interpreted) code memory usage
  • code size

Let’s plot size vs. speed (and ignore memory for the sake of simplicity):

size vs. speed

I highlighted the following:

  • C++ GNU g++
  • C GNU gcc
  • Scala
  • Lua LuaJIT
  • JavaScript V8
  • Python PyPy
You can see that these points form the Pareto frontier, which means that each may be beaten on one metric, but not on both.

But let’s see another visualization of the same data:
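The frontier can be extracted mechanically. Here’s a minimal sketch in Python – the (size, time) numbers are made up for illustration, not the Benchmarks Game’s actual data:

```python
# Find the points on the Pareto frontier of (code size, run time) pairs.
# A point is on the frontier if no other point is at least as good on
# both metrics and strictly better on at least one.
def pareto_frontier(points):
    frontier = []
    for name, size, time in points:
        dominated = any(
            s <= size and t <= time and (s < size or t < time)
            for n, s, t in points if n != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Invented numbers, for illustration only.
langs = [
    ("C gcc",       3.0, 1.0),   # big programs, fastest
    ("C++ g++",     3.2, 1.0),   # bigger, no faster -> dominated by C
    ("Scala",       2.0, 2.5),
    ("Python PyPy", 1.2, 6.0),   # smallest programs, slowest
]
print(pareto_frontier(langs))  # ['C gcc', 'Scala', 'Python PyPy']
```

With these numbers C++ drops off the frontier because C matches its speed with smaller programs – exactly the “beaten on one metric” test.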

size vs. speed – hyperbola

As you can see, the Pareto frontier forms a kind of hyperbola. It’s hard to see here because we have too little data, but this shape is the usual thing. The harder you try to optimize for speed, the more simplicity you have to sacrifice, for gains that keep getting smaller. The more you simplify, the slower it works. Just regular tradeoffs.

So where’s the problem?
Each of the languages has its own hyperbola. There’s more than 1 way of writing a program in each of them, yet each is represented by just 1 point. What’s worse, by an extreme point. Let’s speculate about the hyperbolas for some of them:

size vs. speed – hyperbolas

The drawing may be way off – I didn’t think about it a lot, let alone test it. The problem is that the implementations that are the fastest in a particular language are never even close to efficient in terms of size. The globally fastest language (C++) is Pareto optimal, but the other data points are almost useless. Really, probably none of them would be on the frontier if we allowed more than 1 score per language. They show a language’s performance limits. But its efficiency? Not at all. So what does code size tell us in this test? Nothing. Null. Nada. What does memory usage tell us in a test like this? Exactly the same.
Though in some cases it’s somewhat better – if you can use the benchmark data to estimate how it would work in your case and conclude ‘even when optimized entirely for parameter X, parameter Y is fair enough’.

And the problem is common. The Computer Language Benchmarks Game is otherwise a great resource, yet this is a major flaw in its methods.
Just like in this compression comparison. Does WavPack compress faster than FLAC? In the quick test that I performed – no, it’s 300 times slower. But with the compression settings I chose it’s much stronger. Does that mean it’s much stronger in general? No, we’d have to draw a chart for each of them and then compare. Like it’s done here.

flac vs. wavpack

It shows that in fast modes, FLAC is both faster and stronger than WavPack, but in the slower ones it’s the opposite.
It also shows that TAK and APE are very flexible – they can perform well across a wide range of size vs. speed tradeoffs.
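The point-vs-curve argument can be put in code: instead of comparing one setting of each codec, compare what each codec’s whole settings curve can reach within a given time budget. The curves below are invented for illustration only – they are not real FLAC or WavPack measurements:

```python
# Each curve is a list of (encode_time_seconds, compressed_fraction)
# pairs, one per compression setting. Invented numbers for illustration.
flac_curve    = [(1.0, 0.60), (2.0, 0.58), (4.0, 0.57)]
wavpack_curve = [(1.5, 0.61), (3.0, 0.57), (300.0, 0.55)]

def best_size_within(curve, time_budget):
    """Smallest compressed fraction reachable within a time budget,
    or None if no setting fits the budget."""
    feasible = [size for t, size in curve if t <= time_budget]
    return min(feasible) if feasible else None

# At a tight budget one codec wins; with unlimited time, the other does.
for budget in (2.0, 4.0, 300.0):
    print(budget,
          "flac:", best_size_within(flac_curve, budget),
          "wavpack:", best_size_within(wavpack_curve, budget))
```

With these made-up curves the fast-budget winner and the slow-budget winner are different codecs – the single-point comparison would have hidden that crossover entirely.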

I’m sure one could find many examples outside of IT too. Tradeoffs are universal, and letting the user choose where to make them is not uncommon. So be careful what you do with the data you gather.
