Zenitar 16mm/2.8 Fisheye

March 30th, 2014

For $200 you can get a capable fisheye for Nikon or Canon (and perhaps other mounts). I got mine via eBay from a seller named rus_camera; delivery took less than a week.

Before buying I read through many confusing articles that questioned the optical quality, etc. One common “defect” is that the lens is delivered without the rear-end filters. The standard Zenitar comes with 4 filters made out of 2mm glass (1 UV, green, red, yellow-greenish). The lens is calibrated for this 2mm of glass being in the optical path - if it is missing, the lens will front-focus.


Zeta and the St. Petersburg paradox

March 21st, 2014

The St. Petersburg paradox is a hypothetical game with infinite expectation. This means: if you are offered this game, bet everything you have.

The game is played by a bank that throws a fair coin. The game starts with the bank putting a sum, say $1, into the pot, and a player can enter the game by wagering money on it. If the coin lands Tails, the player wins the pot. If it lands Heads, the bank doubles the pot and throws again. Now, what is the fair price for the player?
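The rules above can be sketched as a tiny simulation (a minimal sketch; `play` is a hypothetical helper name, not from the original post):

```python
import random

def play():
    """One St. Petersburg game: the pot starts at $1 and doubles on
    every Heads; the player collects the pot on the first Tails."""
    pot = 1
    while random.random() < 0.5:  # Heads: the bank doubles the pot
        pot *= 2
    return pot

random.seed(0)
payouts = [play() for _ in range(100_000)]
# The sample mean keeps creeping upward as the number of games grows,
# hinting at the infinite expectation discussed below.
print(sum(payouts) / len(payouts))
```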

Usually such games can be treated with simple probability theory. We compute the “average outcome” of the game.

For the first round there is a 50% chance that the bank will pay $1, so the expected value (EV) is 50cts. For the second throw the bank will pay $2 in 25% of the cases (the 50% chance of Heads on the first throw times the 50% chance that Tails comes this round) - again 50cts of expected value.

So for as long as the game goes, the bank pays 50cts per round on average. Now the problem: Tails may never come, and so the average payout of the bank becomes

50cts + 50cts + 50cts + 50cts + 50cts + 50cts + 50cts + 50cts + 50cts + 50cts + … = ?

This is a divergent sum: most of us have learned that it goes to “infinity” and that this is not a proper value.
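The per-round arithmetic is easy to check (a minimal sketch): the game ends exactly at throw k with probability 2^-k, and the pot at that point is 2^(k-1), so every round contributes exactly 50cts.

```python
# P(game ends at throw k) = 2**-k, pot at throw k = 2**(k-1):
# each round's contribution to the expected value is 0.5**k * 2**(k-1) = 0.5
contributions = [(0.5 ** k) * 2 ** (k - 1) for k in range(1, 11)]
print(contributions)  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```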

So no matter how much you are asked to pay to be allowed to play: math says you should accept, because on average you will win an infinite amount of money. This doesn’t make sense in the real world - there is no infinite amount of money.

Let’s modify the game a bit. Let’s say that you can bet on the event that Tails comes exactly at the k-th throw, paying $1 if you win. So you win if and only if the k-th throw is Tails and all throws before were Heads. What would be the fair price for such a bet? $2^(-k)

Now we can hedge our bets to ensure we break even: we place every possible bet - 50cts on the first throw, 25cts on the second, 12.5cts on the third, etc. Altogether we pay $1 for our infinite number of bets and we win $1 every time. Boring. Actually very boring, because it can take arbitrarily long until we win our dollar. Imagine they throw once a minute and the coin lands Heads 3000 times in a row - you have spent two days in the casino.
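The cost of the hedged bets is just a geometric series (a quick sketch):

```python
# Price 2**-k for the bet "Tails first appears on throw k" (pays $1).
# Buying every bet costs the geometric series 1/2 + 1/4 + 1/8 + ...,
# which sums to $1 - and exactly one of the bets pays out $1.
prices = [2.0 ** -k for k in range(1, 60)]
print(sum(prices))  # approaches 1.0
```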

We haven’t solved the paradox yet, we just eliminated the infinite winnings - but that wasn’t the problem.

To get to big winnings, we have to place the same wager on every bet. Let’s calculate what we have to pay the cashier if we decide to pay $x for each:

$x + $x + $x + $x + $x + $x + $x + $x + $x + $x + $x + $x + $x + $x + … = ?

Again infinity. This makes sense, as the expectation is infinite - but careful: infinity is not a number, and it makes no sense to check two infinite values for equality.

What would you say if I told you that the value of the sum above is -x/2? This you get from the Riemann zeta function: the sum

1+1+1+1+1+1+1+1+1+… = ζ(0) = -1/2 (ζ(s) is the sum of n^(-s) over all n; at s = 0 every term is 1, since any positive number raised to the 0th power gives 1)

Multiply this by x and you get -x/2. Let’s do the same for the sum of the payouts and we get -25cts = ζ(0) * 50cts.

A fair bet is one where the expectation matches the wager. So we have -25cts = -x/2 and hence x = 50cts for the wager.
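The shaky step can be written out (a sketch, using the standard analytic continuation of the zeta function, which as a series converges only for Re(s) > 1):

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s}
  \qquad \text{(convergent for } \operatorname{Re}(s) > 1\text{)}
% Formally plugging in s = 0 gives 1 + 1 + 1 + \cdots, while the
% analytic continuation assigns the value \zeta(0) = -\tfrac{1}{2}.
\sum_{k=1}^{\infty} x \;\text{``=''}\; x \cdot \zeta(0) = -\frac{x}{2}
% Matching the regularized wager to the regularized payout:
-\frac{x}{2} = \zeta(0) \cdot 50\,\text{cts} = -25\,\text{cts}
  \quad\Longrightarrow\quad x = 50\,\text{cts}
```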

OK, this is rather shaky math, but does it perhaps make sense?

So we enter the game for 50cts. If Tails falls, we get $1 and go home with the good feeling that we doubled our money. If we don’t win (50% of the time), we have a shot at even higher winnings. Sounds good? It looks like we got into this game far too cheaply.

But we can look at the game in a different way: actually we always win, but only if Tails falls are we allowed to take our winnings. In all other cases we are playing the game from the beginning with doubled stakes. This is similar to the Gambler’s ruin scenario: instead of taking our winnings, we are wagering them again. The twist here is that the game could go on forever, so the loss is never realized - but neither is the win. If the bank offered “double?” after every win, it would be obvious that we would play on forever, building an infinite bankroll without ever cashing in (the same principle is used in the Who wants to be a Millionaire TV show, but that game is finite).

If we now limit the game to 10 throws, and the player loses if all come out Heads, the expectation is $5, so it makes sense to enter at this price, because you can win up to $512. If the game is limited to 100 throws instead, the expectation is $50 and you could get very, very rich. But you also have to survive the first six throws (all Heads) before the pot exceeds your $50 entry. In any case: you lose the majority of the games you play.
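The truncated expectations can be checked directly (a minimal sketch; `truncated_ev` is a hypothetical helper name):

```python
def truncated_ev(n):
    """Expected payout of the game capped at n throws; the player
    gets nothing if all n throws come out Heads."""
    # each round contributes P(end at k) * pot(k) = 0.5**k * 2**(k-1) = 0.5
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, n + 1))

print(truncated_ev(10))   # 5.0 - the $5 entry price for the 10-throw game
print(truncated_ev(100))  # 50.0 - for the 100-throw game
print(2 ** 9)             # 512 - the maximum pot in the 10-throw game
```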

The interesting fact is that if the game is unlimited, it degenerates back to the single-throw case.


Scala - end of an era

March 7th, 2014

I have used Scala since 2006 - almost 8 years now. It was a bumpy ride in the beginning and I had a lot of success with it, but now it seems I have ended up in a swamp.

The IDE story with Scala was always difficult. It kind of worked somehow, but mixed compilation was never anything for the faint of heart. This works now - SOMETIMES. Sometimes is the worst thing you can have in a professional environment. When you commit code that falls apart on a clean build, for example. Or it builds fine from clean, but shows unpredictable errors on other machines after a refresh.

This wastes so much time that almost all productivity gains are lost. Add terribly slow compiles and near-certain forced restarts after the most innocent fixes (or worse, the compiler silently fails to insert them into the running VM, so you fix them “twice” before realizing the fix was never executed) and the balance sheet turns red.

Try to find call sites - negative - 90% of the callers aren’t found. No wonder there isn’t any refactoring worth mentioning. Coding Scala feels like coding C in vi in the early 90’s. Yes, I am old enough to tell. Inspecting a variable in the debugger? Only in the variables view (oh, you are in a case clause - sorry, no variable bound). Inspecting an expression (the thing that is SOOO great about functional languages)? Sorry, we don’t have that. I debug mostly with println now. Early 90’s.

While I was writing this, the Scala IDE processed the save of a single .scala file (1100 lines), and the “Building Workspace…” progress dropped from 73% (after staying there for 10 minutes) to 70%. No one laughs about Windows progress bars anymore. Needless to say, 100% is followed by 0% just one minute later.

So what is the problem?

One of the compiler hackers, Paul Phillips, gave some insights from the technical side.

I think the problem is more rooted in the concept. Scala needs to see a lot of source code to infer types and verify that the types are consistent. In the end, compile time grows with the complexity of the code. Note that this is not linear in code size; it might easily be worse. My experience is that it is at least a higher-degree polynomial: 1000 lines - fine, 10 kloc - OK, 50 kloc - wait, 8 Mloc - write a long blog post.

It can be fast - Haskell is living proof - but Scala aimed too high. Scala has classes and inheritance. I feel inheritance is a questionable feature that is inherited (unintended pun) from the JVM. But Java, and thus the JVM, has a very “generic” type system. Now Scala comes with higher-kinded types and more, while the Java libraries always accept “Object”. This creates a constant mismatch between what Scala thinks a type is (or should be) and what the JVM can cope with. To compensate, the Scala compiler needs to see through the code, and this takes time … and more time.

And even more.

So where to go?

Clojure? Go? Dart?

All three are capable languages, but with the exception of Clojure they don’t integrate with Java code. Mlocs of Java are the reality. It is delusional to write a large system today without referencing Java code (Googlers, I envy you, but you are not the norm). The typical systems today are huge Java codebases maintained incrementally. A one-shot migration is out of the question. There are normally only incomplete auto-tests, so you have to migrate step by step to get feedback. And you have to apply the steps quickly: you can’t refactor for 12 months and then integrate. Real systems are a moving target; otherwise the migration is not worth the effort.

Clojure is a great language. My experience with it was mixed: great coding (small scale), but when showing it to coworkers: WTF!

The main problem is the syntax. A bit more “C” syntax wouldn’t hurt. I think the success of JavaScript is largely due to its simple C-style syntax, which is in some weird sense “clear”. DSLs are the antithesis of this.

DSLs are a good indicator. Almost all good uses of them - as far as I have seen - are to implement a kind of meta-object protocol.

Good old CLOS… give me a better syntax and I’ll be all yours (as long as you run on the JVM).