Video Transcoding: Handbrake + libdvdcss

Suppose you had a collection of DVDs which you lawfully bought.
Suppose you wanted to watch them on your iPad while running at the gym. You need to transcode them, i.e. convert them into another format (in my case .mp4).

Surprisingly, this simple problem is not that tractable unless you commit money to some expensive proprietary software. The issue is the Content Scramble System, an encryption scheme that scrambles the contents of the DVD and gets in the way of reading them.

A simple transcoder – a utility that converts one format into another – may fail if the encryption is not taken care of. And this is exactly what happens with a simple-minded use of the marvellous free & open source transcoder Handbrake.


Handbrake needs the crucial library that undoes the scrambling: libdvdcss.

Here, then, is the solution for Handbrake 0.10.5.0 (64-bit) on Windows:
Download libdvdcss-2.dll from
http://download.videolan.org/libdvdcss/1.2.11/win32/libdvdcss-2.dll (32-bit version) or http://download.videolan.org/libdvdcss/1.2.11/win64/libdvdcss-2.dll (64-bit version).

Then, move the libdvdcss-2.dll into your Handbrake install directory (usually C:\Program Files\Handbrake\).
Enjoy transcoding!

Feed-forward networks and teleology

Bertrand Russell, in “History of Western Philosophy”, pp. 86-87, writes:

The atomists, unlike Socrates, Plato, and Aristotle, sought to explain the world without introducing the notion of purpose or final cause. The “final cause” of an occurrence is an event in the future for the sake of which the occurrence takes place. In human affairs, this conception is applicable. Why does the baker make bread? Because people will be hungry. Why are railways built? Because people will wish to travel. In such cases, things are explained by the purpose they serve. When we ask “why?” concerning an event, we may mean either of two things. We may mean: “What purpose did this event serve?” or we may mean: “What earlier circumstances caused this event?” The answer to the former question is a teleological explanation, or an explanation by final causes; the answer to the latter question is a mechanistic explanation. I do not see how it could have been known in advance which of these two questions science ought to ask, or whether it ought to ask both. But experience has shown that the mechanistic question leads to scientific knowledge, while the teleological question does not. The atomists asked the mechanistic question, and gave a mechanistic answer. Their successors, until the Renaissance, were more interested in the teleological question, and thus led science up a blind alley.
 

Is a feed-forward network (or any inverse problem) going to change this in a qualitative way? The mind goes again to Norbert Wiener and the 1943 article “Behavior, Purpose and Teleology”. McCulloch and Pitts, with their logical calculus of the nervous system, were just around the corner.

Market completion with Wishart variance

This page documents the simulation of the model described here, i.e. the Wishart-based stochastic volatility model. The parameters for the Euler-Maruyama simulation are:

S_0 = 1
T = 1
\Delta t = 0.01
d = 2
\alpha = 4.8
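
For reference, here are the dynamics being simulated, reconstructed from the comments in the code below (S_t is the d \times d Wishart variance matrix, X_t the log-spot, and B_t, W_t are d \times d matrix Brownian motions):

dS_t = \sqrt{S_t}\,dB_t\,Q + Q^{\top} dB_t^{\top}\,\sqrt{S_t} + \left(S_t M + M^{\top} S_t + \alpha\, Q^{\top} Q\right) dt, \qquad S_0 = s_0,

dX_t = -\tfrac{1}{2}\,\operatorname{Tr}(S_t)\,dt + \operatorname{Tr}\!\left(\sqrt{S_t}\,dZ_t\right), \qquad Z_t = W_t R^{\top} + B_t\,\sqrt{I_d - R R^{\top}}.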

[Figure: wishart_paths_20]

[Figure: wishart_paths_100]

[Figure: example of an arbitrage-free volatility surface (dynamics of the vol surface) generated by the model; x axis: tenor structure, y axis: strike level, z axis: call option price]

The simulation of the Wishart variance process is accomplished via the following code:

function [S,eigS,timestep,minh]=Wishart(T,h,d,alpha,Q,M,s_0)
%
% Simulates the d-dimensional Wishart process on the interval [0,T]
% that follows the stochastic differential equation
% dS_t=sqrt{S_t}*dB_t*Q+Q'*dB_t'*sqrt{S_t}+(S_t*M+M'*S_t+alpha*Q'*Q)dt
% with initial condition S_0=s_0.
%
% Method of discretization: Euler-Maruyama
% Starting step size: h
% In order to guarantee positive semidefiniteness of S, the step size will be
% reduced iteratively if necessary.
%
% Output:
% S is a three dimensional array of the discretized Wishart process.
% eigS is a matrix consisting the eigenvalues of S
% timestep is the vector of all timesteps in [0,T]
% minh is the smallest step size used, i.e. minh=min(diff(timestep))
%--------------------------------------------------------------------
horg=h;
minh=h;
timestep=0;
[V_0,D_0]=eig(s_0);
eigS=sort(diag(D_0)');
% to ensure evolution in the right space
drift_fix=alpha*Q'*Q;


B_old=0;
B_inc=sqrt(h)*normrnd(0,1,d,d);
% not clear why he needs Q : the correlation structure?
vola=V_0*sqrt(D_0)*V_0'*B_inc*Q;
drift=s_0*M;
S_new = s_0+vola+vola'+(drift+drift'+drift_fix)*h;
[V_new,D_new]=eig(S_new);
eigS=[eigS;sort(diag(D_new)')];

% concatenate along third dimension
S=cat(3,s_0,S_new);
t=h;
timestep=[timestep;t];
flag=0;
while t+h<T,
	B_old=B_old+B_inc;
	B_inc=sqrt(h)*normrnd(0,1,d,d);
	S_t=S_new; V_t=V_new; D_t=D_new;
	sqrtm_S_t=V_t*sqrt(D_t)*V_t';
	vola=sqrtm_S_t*B_inc*Q;
  %vola= V_t*sqrt(D_t)*V_t'*B_inc*Q;
	drift=S_t*M;
	S_new = S_t+vola+vola'+(drift+drift'+drift_fix)*h;
	[V_new,D_new]=eig(S_new);
	mineig=min(diag(D_new));
  
  % check whether the minimum eigenvalue is negative
	while mineig<0,
		h=h/2;
		minh=min(minh,h);
		flag=1;
		if h<eps, error('Step size converges to zero'), return, end
		B_inc=0.5*B_inc+sqrt(h/2)*normrnd(0,1,d,d);
		vola=sqrtm_S_t*B_inc*Q;
		drift=S_t*M;
		S_new = S_t+vola+vola'+(drift+drift'+drift_fix)*h;
		mineig=min(eig(S_new));
	end
	if flag==0,
		eigS=[eigS;sort(diag(D_new)')];
		S=cat(3,S,S_new);
		t=t+h;
		timestep=[timestep;t];
	end
	if flag==1,
		[V_new,D_new]=eig(S_new);
		eigS=[eigS;sort(diag(D_new)')];
		S=cat(3,S,S_new);
		t=t+h;
		timestep=[timestep;t];
		flag=0;
		h=horg;
	end
end
h_end=T-t;
if h_end>0,
	B_inc=sqrt(h_end)*normrnd(0,1,d,d);
	S_t=S_new; V_t=V_new; D_t=D_new;
	vola=V_t*sqrt(D_t)*V_t'*B_inc*Q;
	drift=S_t*M;
	S_new = S_t+vola+vola'+(drift+drift'+drift_fix)*h;
	S=cat(3,S,S_new);
	eigS=[eigS;sort(eig(S_new)')];
	timestep=[timestep;T];
end
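
As a quick smoke test of the function (a minimal sketch: the parameter values are the ones hard-coded in EulerSinglePath below, and the plotting line is just illustrative):

Q   = [0.35, 0.; 0., 0.4];
M   = [-0.6, 0.; 0., -0.4];
s_0 = [0.5,  0.; 0., 0.5];
[S, eigS, timestep, minh] = Wishart(1, 0.01, 2, 4.8, Q, M, s_0);
min(eigS(:))                          % the step-size control is meant to keep this non-negative
plot(timestep, squeeze(S(1,1,:)))     % first diagonal component of the variance matrix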

When that is done, the following simulates a single Euler-Maruyama realization of the log-asset path:

function [X]=EulerSinglePath(T,h,S_0)
% simulation of Wishart based process
% parameters are drawn from Gauthier and Possamai paper 
% Efficient Simulation 

% dimension of the Wishart variance process
d = 2;
alpha =4.8;
% initial log spot
X_0 = log(S_0);

% Variance initial level
s_0 = [0.5,  0.; 0., 0.5];
% drift correction
Q   = [0.35, 0.; 0., 0.4];
% drift component
M   = [-0.6, 0.; 0., -0.4];
% correlation
R   = [-0.5, 0.; 0.,-0.5];

% Simulation of wishart variance
[S,eigS,timestep,minh] = Wishart(T,h,d,alpha,Q,M,s_0);

% after the simulation of the matrix-valued variance process, the asset itself is simulated here

% precomputing the correlation structure
IminRR =  eye(d,d)-R*R';
[V_ImRR,D_ImRR]=eig(IminRR);
sqImR=V_ImRR*sqrt(D_ImRR)*V_ImRR';


% cycling over the third dimension of the array
n_steps = size(S, 3);
X = zeros(1, n_steps);
% setting the initial log asset level
X(1) = X_0;

for n = 2:n_steps
  % actual step size (the Wishart simulation may have reduced it locally)
  dt = timestep(n) - timestep(n-1);

  % drift of the log-price: -1/2 Tr(S_t) dt, evaluated at the left endpoint
  TrS_t = trace(S(:,:,n-1));
  drift = -0.5*TrS_t*dt;

  % VOLATILITY
  [V_t,D_t] = eig(S(:,:,n-1));
  % the formula below corresponds to
  % Z_t := W_t R^T + B_t sqrt(I_d - R R^T)
  % (here W_t is drawn independently of the Brownian increments used inside Wishart)
  W_t = sqrt(dt)*normrnd(0,1,d,d);
  B_t = sqrt(dt)*normrnd(0,1,d,d);
  Z   = W_t * R' + B_t * sqImR;

  vola = V_t*sqrt(D_t)*V_t'*Z;

  X(n) = X(n-1) + drift + trace(vola);
end
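
And here is a minimal driver, a sketch that only assumes the two functions above are on the Octave path:

T   = 1;       % horizon
h   = 0.01;    % starting step size
S_0 = 1;       % initial spot
X = EulerSinglePath(T, h, S_0);
plot(exp(X));                  % back from log-spot to the simulated asset price
xlabel('time step'); ylabel('asset price');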

 


NOTES


Rogers, Tehranchi, Can the implied volatility surface move by parallel shifts?

Octave meshgrid, surf and video slideshow

Since the release of the splendid GUI, Octave has become again one of my favorite tools. Here is a simple and hopefully useful application. The following code is quite self-explanatory:

tx = ty = linspace (-4, 4, 41)';
[xx, yy] = meshgrid (tx, ty);
for ii=1:23
  alpha = 0.01*ii;
  tz = exp(-alpha*(xx .^2 + yy .^2 ));
  surf(tx, ty, tz);
  tit = strcat ("alpha=", num2str(alpha));
  title(tit);
  fname = sprintf("anim_%02i.jpg", ii);
  print (fname)
  ans = yes_or_no ("prompt")   % pause between frames: answer to advance to the next one
endfor

 

After that, having installed ffmpeg, just issue the following from either a bash or a cmd shell:

ffmpeg.exe  -framerate 2 -start_number 1 -i "anim_%02d.jpg" -vcodec mpeg4 evolution.mp4

 

to obtain the animation evolution.mp4.

 

Ipython: simple finite differences

Ipython as a prototyping tool for scientific computation is very neat and useful indeed.
After the Win 8 minimal installation reviewed in a previous post, we are now going for a simple Crank–Nicolson scheme for a parabolic PDE. We keep the diffusion non-degenerate, as we do not want the problem to become hyperbolic and lose the parabolic smoothing.
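
As a reminder of the scheme the notebook implements (this is the textbook stencil for the model equation u_t = D\,u_{xx}, not a transcription of the notebook's code), Crank–Nicolson averages the explicit and the implicit second differences:

\frac{u_j^{n+1}-u_j^{n}}{\Delta t} = \frac{D}{2}\left(\frac{u_{j+1}^{n+1}-2u_j^{n+1}+u_{j-1}^{n+1}}{\Delta x^{2}} + \frac{u_{j+1}^{n}-2u_j^{n}+u_{j-1}^{n}}{\Delta x^{2}}\right),

which is second-order accurate in both \Delta t and \Delta x and unconditionally stable for this equation.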

To run the notebook, one simply needs two more libraries, “numpy” and “matplotlib”.
They can be obtained as precompiled “wheel” files, courtesy of Christoph Gohlke,
from here.

After the relevant wheel files have been downloaded, simply run cmd (as administrator) and issue
from the directory where they have been saved:

pip install "numpy_my_version.whl"
pip install "matplotlib_my_version.whl"

(where of course the suffix “my_version” has to be changed to whatever the actual version is).
This zip contains a modified version of a buggy notebook found on the internet. It is now fully working in Python >= 3.x.
After uploading it in the usual way, run it. The finite difference stencil is easily computed, and the heatmap is nicely displayed. LaTeX is there as well to document the mathematical steps.
Literate Programming at its best.

Ipython in Windows 8: “Hello World”

This post documents the installation of Ipython in Windows 8.
As an example of Don Knuth’s Literate Programming, Ipython is simply great.
One can write down the mathematical equations of a model, code the numerics and run the program against data. The full lifecycle of science, in a single sheet.

1) Install the “Python” runtime. Version 3.x is recommended, from this link.

2) Suppose the runtime has been installed in “C:\Python3.4”. Add this directory, as well as “C:\Python3.4\Scripts”, to the system PATH.

3) Save this file in the same directory, “C:\Python3.4”, then run cmd as administrator and issue from that directory:

python ez_setup.py

4) A C compiler is needed to compile extensions. Among the various choices, this is the simplest:

5) Create a vcvars64.bat file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64 that contains:

CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64

6) Issue as admin:

easy_install ipython[all]

then

pip install markupsafe

Finally, run it

 
ipython notebook

After that, the page

"http://localhost:8888/tree"

will open in the browser. Under the heading Files->Upload,

unzip this minimal ipython notebook and load it. The first ipython program is up: it can be executed and modified interactively.

Science Friction (1)

Mersenne numbers, when they are prime, are routinely used to feed Monte Carlo random number generation (the Mersenne Twister, for instance, owes its period 2^{19937}-1 to one of them). They have the form:

M_n := 2^{n}-1, \hspace{0.5cm} n \in \mathbb{N}.

Since not all of them are prime, we can use the prime factorization of a sequence of big Mersenne numbers to benchmark our CPU.
The ingredients are fairly minimalistic:
all we need is a Linux bash shell (with the wonderful arbitrary-precision calculator bc), the Java compiler (javac) and the Java virtual machine (java). To factorize we can use, for example, the Pollard rho method in this implementation (compile it once with javac PollardRho.java):

import java.math.BigInteger;
import java.security.SecureRandom;
  

class PollardRho {
    private final static BigInteger ZERO = new BigInteger("0");
    private final static BigInteger ONE  = new BigInteger("1");
    private final static BigInteger TWO  = new BigInteger("2");
    private final static SecureRandom random = new SecureRandom();

    public static BigInteger rho(BigInteger N) {
        BigInteger divisor;
        BigInteger c  = new BigInteger(N.bitLength(), random);
        BigInteger x  = new BigInteger(N.bitLength(), random);
        BigInteger xx = x;

        // check divisibility by 2
        if (N.mod(TWO).compareTo(ZERO) == 0) return TWO;

        do {
            x  =  x.multiply(x).mod(N).add(c).mod(N);
            xx = xx.multiply(xx).mod(N).add(c).mod(N);
            xx = xx.multiply(xx).mod(N).add(c).mod(N);
            divisor = x.subtract(xx).gcd(N);
        } while((divisor.compareTo(ONE)) == 0);

        return divisor;
    }

    public static void factor(BigInteger N) {
        if (N.compareTo(ONE) == 0) return;
        if (N.isProbablePrime(20)) 
            { System.out.println(N); return; }
        BigInteger divisor = rho(N);
        factor(divisor);
        factor(N.divide(divisor));
    }

 
    public static void main(String[] args) {
        BigInteger N = new BigInteger(args[0]);
        factor(N);
    }
}

If processor time has to be accurately recorded, just add the “time” command in front of the java runtime, like this:

for n in `seq 101 2 201`; \
do echo "[Factoring Mersenne `echo 2^$n-1`]" ; \
time java PollardRho `echo 2^$n -1 |bc` ; \
done

The code snippet above loops over the odd numbers n \in \{101,103,\ldots,199,201\}, uses each of them as a Mersenne exponent in M_n := 2^n-1 and prints the prime factors. Check, for example, that for n=107 the Mersenne number 2^{107}-1 = 162259276829213363391578010288127 is in fact a Mersenne prime, so the factorization consists of the number itself.