Video Transcoding: Handbrake + libdvdcss

Suppose you have a collection of DVDs that you lawfully bought in the marketplace.
Suppose you want to watch them on your iPad while running at the gym. You need to transcode them, i.e. convert them into another format (in my case .mp4).

Surprisingly, this simple problem is not so tractable, unless you commit money to some expensive proprietary software. The issue is the Content Scramble System (CSS), an encryption system that scrambles the contents of the DVD and blocks a straightforward read.

A simple transcoder – a utility that converts one format into another – may fail if the encryption is not taken care of. And this is exactly what happens with a naive use of the marvellous free & open source transcoder Handbrake.


Handbrake needs the crucial library that inverts the scrambling, libdvdcss.

Here then is the solution for Handbrake (64-bit) on Windows:
download libdvdcss-2.dll from (32-bit version) or (64-bit version).

Then move libdvdcss-2.dll into your Handbrake install directory (usually C:\Program Files\Handbrake\).
Enjoy transcoding!

Feed-forward networks and teleology

Bertrand Russell, in “History of Western Philosophy”, pp. 86-87, writes:

The atomists, unlike Socrates, Plato, and Aristotle, sought to explain the world without introducing the notion of purpose or final cause. The “final cause” of an occurrence is an event in the future for the sake of which the occurrence takes place. In human affairs, this conception is applicable. Why does the baker make bread? Because people will be hungry. Why are railways built? Because people will wish to travel. In such cases, things are explained by the purpose they serve. When we ask “why?” concerning an event, we may mean either of two things. We may mean: “What purpose did this event serve?” or we may mean: “What earlier circumstances caused this event?” The answer to the former question is a teleological explanation, or an explanation by final causes; the answer to the latter question is a mechanistic explanation. I do not see how it could have been known in advance which of these two questions science ought to ask, or whether it ought to ask both. But experience has shown that the mechanistic question leads to scientific knowledge, while the teleological question does not. The atomists asked the mechanistic question, and gave a mechanistic answer. Their successors, until the Renaissance, were more interested in the teleological question, and thus led science up a blind alley.

Is a feed-forward network (or any inverse problem) going to change this in a qualitative way? The mind goes again to Norbert Wiener and his 1943 article “Behavior, Purpose and Teleology”. McCulloch and Pitts, with their logical calculus of the nervous system, were just around the corner.

Market completion with Wishart variance

The following page documents the simulation of the model described here, i.e. Wishart-based stochastic volatility. The parameters for the Euler-Maruyama simulation are:

S_0 = 1
T = 1
\Delta t = 0.01
d = 2
\alpha = 4.8
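As a cross-check of the scheme, one Euler-Maruyama step of the same SDE can be sketched in NumPy (the function name wishart_em_step and the eigendecomposition-based matrix square root are my choices for illustration, not part of the original code):

```python
import numpy as np

def wishart_em_step(S, Q, M, alpha, dt, rng):
    """One Euler-Maruyama step of
    dS = sqrt(S) dB Q + Q' dB' sqrt(S) + (S M + M' S + alpha Q' Q) dt."""
    d = S.shape[0]
    dB = rng.standard_normal((d, d)) * np.sqrt(dt)
    w, V = np.linalg.eigh(S)                      # S = V diag(w) V'
    sqrtS = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    vola = sqrtS @ dB @ Q
    drift = S @ M + M.T @ S + alpha * (Q.T @ Q)
    return S + vola + vola.T + drift * dt

rng = np.random.default_rng(0)
d, alpha, dt = 2, 4.8, 0.01
S = 0.5 * np.eye(d)                               # s_0 from the post
Q = np.diag([0.35, 0.4])
M = np.diag([-0.6, -0.4])
S = wishart_em_step(S, Q, M, alpha, dt, rng)
print(np.allclose(S, S.T))                        # symmetry is preserved: True
```

Symmetry is preserved by construction, since vola + vola' and the drift are both symmetric; positive semidefiniteness is not, which is why the full code below shrinks the step when an eigenvalue turns negative.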




Example of an arbitrage-free volatility surface (dynamics of the vol surface) generated by the model (here the x axis is the tenor structure, the y axis the strike level, and the z axis the call option price):

The simulation of the Wishart variance process is accomplished via the following code:

function [S,eigS,timestep,minh]=Wishart(T,h,d,alpha,Q,M,s_0)
% Simulates the d-dimensional Wishart process on the interval [0,T]
% that follows the stochastic differential equation
% dS_t = sqrt{S_t}*dB_t*Q + Q'*dB_t'*sqrt{S_t} + (S_t*M + M'*S_t + alpha*Q'*Q)dt
% with initial condition S_0 = s_0.
% Q is the vol-of-vol matrix; M the mean-reversion matrix.
% Method of discretization: Euler-Maruyama
% Starting step size: h
% To ensure the evolution stays in the right space (S positive
% semidefinite), the step size is reduced iteratively if necessary.
% Output:
% S        three-dimensional array of the discretized Wishart process
% eigS     matrix whose columns are the eigenvalues of S at each step
% timestep vector of all timesteps in [0,T]
% minh     smallest step size used, i.e. minh=min(diff(timestep))

drift_fix = alpha*(Q'*Q);            % constant part of the drift
S_t = s_0;
[V_t, D_t] = eig(S_t);               % S_t = V_t*D_t*V_t'
t = 0;
S = S_t; eigS = diag(D_t); timestep = t;

while t < T,
  ht = min(h, T-t);                  % do not overshoot T
  mineig = -1;
  while mineig < 0,
    B_inc = sqrt(ht)*randn(d,d);             % Brownian increment
    vola  = V_t*sqrt(D_t)*V_t'*B_inc*Q;      % sqrt{S_t}*dB_t*Q
    drift = S_t*M;
    S_new = S_t + vola + vola' + (drift + drift' + drift_fix)*ht;
    [V_new, D_new] = eig((S_new + S_new')/2);
    % check whether the minimum eigenvalue is negative
    mineig = min(diag(D_new));
    if mineig < 0,
      ht = ht/2;                             % reduce the step and retry
      if ht < eps, error('Step size converges to zero'), end
    end
  end
  t = t + ht;
  S_t = S_new; V_t = V_new; D_t = D_new;
  % concatenate along the third dimension
  S = cat(3, S, S_t);
  eigS = [eigS, diag(D_t)];
  timestep = [timestep, t];
end
minh = min(diff(timestep));

When that is done, the following simulates a single Euler-Maruyama realization of the log-asset path:

function [X]=EulerSinglePath(T,h,S_0)
% Simulation of the Wishart-based log-asset process.
% Parameters are drawn from the Gauthier and Possamai paper
% "Efficient Simulation".

% dimension of the Wishart variance process
d = 2;
alpha = 4.8;
% initial log spot
X_0 = log(S_0);

% variance initial level
s_0 = [0.5,  0.; 0., 0.5];
% vol-of-vol matrix
Q   = [0.35, 0.; 0., 0.4];
% drift component
M   = [-0.6, 0.; 0., -0.4];
% correlation
R   = [-0.5, 0.; 0., -0.5];

% simulation of the Wishart variance
[S,eigS,timestep,minh] = Wishart(T,h,d,alpha,Q,M,s_0);

% After the simulation of the matrix-valued variance process,
% the asset itself is simulated below.

% precompute the correlation structure sqrt(I_d - R*R')
sqImR = sqrtm(eye(d,d) - R*R');

% cycle along the third dimension of the array
n_steps = size(S,3);
X = zeros(1, n_steps);
% set the initial log-asset level
X(1) = X_0;

for n = 2:n_steps
  dt    = timestep(n) - timestep(n-1);
  S_t   = S(:,:,n-1);
  TrS_t = trace(S_t);
  drift = -0.5*TrS_t*dt;
  % Brownian increments: W_t drives the variance, B_t is independent
  % (drawn independently here, since Wishart does not return its increments)
  W_t = sqrt(dt)*randn(d,d);
  B_t = sqrt(dt)*randn(d,d);
  % the formula below corresponds to
  % Z_t := W_t*R^T + B_t*sqrt{I_d - R*R^T}
  Z = W_t*R' + B_t*sqImR;
  X(n) = X(n-1) + drift + trace(sqrtm(S_t)*Z);
end



Reference: Rogers and Tehranchi, “Can the implied volatility surface move by parallel shifts?”

Octave meshgrid, surf and video slideshow

Since the release of the splendid GUI, Octave has again become one of my favorite tools. Here is a simple and hopefully useful application. The following code is fairly self-explanatory:

tx = ty = linspace (-4, 4, 41)';
[xx, yy] = meshgrid (tx, ty);
for ii = 1:23
  alpha = 0.01*ii;
  tz = exp (-alpha*(xx.^2 + yy.^2));
  surf (tx, ty, tz);
  tit = strcat ("alpha=", num2str (alpha));
  title (tit);
  fname = sprintf ("anim_%02i.jpg", ii);
  print (fname);
endfor


After that, having installed ffmpeg, just issue from either a bash or a cmd shell:

ffmpeg.exe  -framerate 2 -start_number 1 -i "anim_%02d.jpg" -vcodec mpeg4 evolution.mp4


to obtain:


Ipython: simple finite differences

IPython as a prototyping tool for scientific computation is indeed very neat and useful.
After the Windows 8 minimal installation reviewed in a previous post, we now apply the Crank-Nicolson scheme to a parabolic PDE. We keep the diffusion non-degenerate, as we don't want the problem to become hyperbolic and to lose the parabolic smoothing.
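To give a flavour of what the notebook computes, here is a minimal Crank-Nicolson sketch for the model problem u_t = u_xx on [0,1] with homogeneous Dirichlet boundary conditions (grid sizes and the dense linear solve are illustrative choices of mine, not the notebook's actual code):

```python
import numpy as np

# grid: 51 space nodes, 100 time steps
nx, nt = 51, 100
dx, dt = 1.0 / (nx - 1), 1e-4
r = dt / (2 * dx**2)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # initial condition, decays as exp(-pi^2 t)

# Crank-Nicolson: (I - r L) u^{n+1} = (I + r L) u^n on the interior nodes
A = np.diag(np.full(nx - 2, 1 + 2 * r)) \
  + np.diag(np.full(nx - 3, -r), 1) + np.diag(np.full(nx - 3, -r), -1)
B = np.diag(np.full(nx - 2, 1 - 2 * r)) \
  + np.diag(np.full(nx - 3, r), 1) + np.diag(np.full(nx - 3, r), -1)

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

# compare with the exact solution exp(-pi^2 t) sin(pi x)
t = nt * dt
err = np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x)))
print(err)   # small: the scheme is second order in both dx and dt
```

In a real notebook one would of course use a sparse tridiagonal solver instead of the dense np.linalg.solve, but the stencil is the same.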

To run the notebook, one simply needs two more libraries, numpy and matplotlib.
They can be obtained as precompiled "wheel" files, courtesy of Christoph Gohlke,
from here.

After the relevant wheel files have been downloaded, simply run cmd (as administrator) and issue
from the directory where they have been saved:

pip install "numpy_my_version.whl"
pip install "matplotlib_my_version.whl"

(where of course the suffix "my_version" has to be replaced by whatever the actual version is).
This zip contains a fixed version of a buggy notebook found on the internet. It now works in Python 3.
After uploading it in the usual way, run it. The finite-difference stencil is easily computed, and the heatmap is nicely displayed. LaTeX is there as well to document the mathematical steps.
Literate Programming at its best.

Ipython in Windows 8: “Hello World”

This post documents the installation of Ipython in Windows 8.
As an example of Don Knuth’s Literate Programming, IPython is simply great.
One can write down the mathematical equations of a model, code the numerics and run the program against data. The full lifecycle of science, in a single sheet.

1) Install the “Python” runtime. Version 3.x is recommended, from this link.

2) Suppose that Python has been installed in "C:\Python3.4". Add this directory, as well as "C:\Python3.4\Scripts", to the system path.

3) Save this file in the same directory “C:\Python3.4”, and running cmd as administrator, from a shell issue


4) A C compiler is needed to compile extensions. Among various choices this is the simplest:

5) Create a vcvars64.bat file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64 containing:

CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64

6) Issue as administrator:

easy_install ipython[all]


pip install markupsafe

Finally, run it

ipython notebook

After that, the page


will open in the browser. Under Files->Upload,

unzip this minimal IPython notebook and load it. The first IPython program is up: it can be executed and modified interactively.

Science Friction (1)

Mersenne numbers, when they are prime, are routinely used to feed Monte Carlo random number generation. They have the form:

M_n := 2^{n}-1, \hspace{0.5cm} n \in \mathbb{N}.

Since not all of them are prime, we can use the prime factorization of a sequence of big Mersenne numbers to benchmark our CPU.
The ingredients are fairly minimalistic:
all we need is a Linux bash shell (with the wonderful bench calculator bc), the Java compiler (javac) and the Java virtual machine (java). To factorize we can use, for example, the Pollard rho method in this implementation.

import java.math.BigInteger;
import java.security.SecureRandom;

class PollardRho {
    private final static BigInteger ZERO = new BigInteger("0");
    private final static BigInteger ONE  = new BigInteger("1");
    private final static BigInteger TWO  = new BigInteger("2");
    private final static SecureRandom random = new SecureRandom();

    public static BigInteger rho(BigInteger N) {
        BigInteger divisor;
        BigInteger c  = new BigInteger(N.bitLength(), random);
        BigInteger x  = new BigInteger(N.bitLength(), random);
        BigInteger xx = x;

        // check divisibility by 2
        if (N.mod(TWO).compareTo(ZERO) == 0) return TWO;

        do {
            x  =  x.multiply(x).mod(N).add(c).mod(N);
            xx = xx.multiply(xx).mod(N).add(c).mod(N);
            xx = xx.multiply(xx).mod(N).add(c).mod(N);
            divisor = x.subtract(xx).gcd(N);
        } while ((divisor.compareTo(ONE)) == 0);

        return divisor;
    }

    public static void factor(BigInteger N) {
        if (N.compareTo(ONE) == 0) return;
        if (N.isProbablePrime(20))
            { System.out.println(N); return; }
        BigInteger divisor = rho(N);
        // recurse on the divisor and the cofactor
        factor(divisor);
        factor(N.divide(divisor));
    }

    public static void main(String[] args) {
        BigInteger N = new BigInteger(args[0]);
        factor(N);
    }
}

If processor time has to be accurately recorded, just add the time command in front of the java invocation, like this:

for n in `seq 101 2 201`; \
do echo "[Factoring Mersenne `echo 2^$n-1`]" ; \
time java PollardRho `echo 2^$n -1 |bc` ; \
done

The code snippet above loops over the odd numbers n \in \{101,103,\ldots,199,201\}, uses each as a Mersenne exponent in M_n := 2^n-1 and prints the prime factors. Check, for example, that for n=107 the Mersenne number is in fact a Mersenne prime.
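For a quick sanity check outside the JVM, the same rho iteration can be sketched in a few lines of Python (the helper name pollard_rho is my choice):

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial divisor of a composite n via Pollard's rho."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = xx = random.randrange(0, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            xx = (xx * xx + c) % n       # hare: two steps
            xx = (xx * xx + c) % n
            d = math.gcd(x - xx, n)
        if d != n:                       # d == n means a bad cycle: retry
            return d

# M_11 = 2047 = 23 * 89 is the classic composite Mersenne number
m11 = 2**11 - 1
d = pollard_rho(m11)
print(d, m11 // d)                       # the factors 23 and 89, in either order
```

Like the Java version, this finds a single nontrivial divisor; full factorization recurses on the divisor and the cofactor.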