My research interests are robust prediction and optimization in finance and in "big data" applications (e.g., online advertising).
We propose a new statistical method to improve the reliability of state-of-the-art methods used in research and in industry for prediction, portfolio optimization, and risk assessment. Our information-theoretic approach is documented in "Parameter-free inference" (with Professor David G. Luenberger, 2012).
The impact of estimation errors on nonlinear and nonparametric methods is well documented in recent publications:
"The literature is difficult to absorb. Different papers use different techniques, variables, and time periods. Results from papers that were written years ago may change when more recent data is used. Some papers contradict the findings of others. Still, most readers are left with the impression that 'prediction [of the equity premium] works' - though it is unclear exactly what works." Goyal/Welch, Review of Financial Studies, 2007.
"We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. ... none [of the stat-of-the-art portfolio optimization methods] is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, [because of estimation errors]. This suggests that there are still many 'miles to go' before the gains promised by optimal portfolio choice can actually be realized." DeMiguel/Garlappi/Uppal, Review of Financial Studies, 2007.
To overcome such statistical illusions, we propose a new inference method that involves no estimated or tuned parameters and makes no assumptions about the underlying probability distributions or stochastic processes. We present a parameter-free density estimator that is more accurate in finite samples than competing estimation schemes such as kernel density estimation with adaptive bandwidths (a standard baseline for this comparison is sketched below). This estimator is the core of a parameter-free prediction method that requires no assumptions on the underlying data-generating process; numerical experiments show that it outperforms benchmark prediction methods on a variety of data sets. Based on the Law of Small Numbers, we propose a stochastic optimization method that converges quickly to optimal portfolios in small samples, and we obtain an accurate measure of (Conditional) Value-at-Risk based on the Central Sample Theorem. Our method is also a valuable tool for "big data" tasks, since the resulting universal prediction and optimization problems are convex and therefore computationally very tractable (see the CVaR sketch below).
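Since the parameter-free estimator itself is specified in the paper, the sketch below only sets up the kind of finite-sample benchmark it is compared against: the mean integrated squared error of a standard Gaussian kernel density estimator (Silverman bandwidth) on a known mixture density. The mixture, sample sizes, and repetition counts are assumptions chosen for illustration.

```python
# Illustrative baseline only: finite-sample accuracy of a standard kernel
# density estimator, measured as integrated squared error (ISE) against a
# known two-component Gaussian mixture. The paper's parameter-free estimator
# is not reproduced here; this merely shows the benchmark setup.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)

def true_pdf(x):
    # Two-component Gaussian mixture with well-separated modes.
    return 0.5 * norm.pdf(x, -2, 0.5) + 0.5 * norm.pdf(x, 2, 1.0)

def sample(n):
    comp = rng.random(n) < 0.5
    return np.where(comp, rng.normal(-2, 0.5, n), rng.normal(2, 1.0, n))

grid = np.linspace(-6, 6, 2001)
for n in (25, 100, 400):                      # small-sample regimes
    ise = []
    for _ in range(200):                      # Monte Carlo repetitions
        kde = gaussian_kde(sample(n), bw_method="silverman")
        ise.append(np.trapz((kde(grid) - true_pdf(grid)) ** 2, grid))
    print(f"n={n:4d}  mean ISE of KDE baseline: {np.mean(ise):.5f}")
```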
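To illustrate the convexity and tractability claim for the risk measure, here is a minimal sketch of sample-based CVaR minimization posed as a linear program via the standard Rockafellar-Uryasev reformulation. This is a textbook construction standing in for the paper's own method, and the scenario data are synthetic assumptions.

```python
# Hedged sketch: minimize the sample CVaR of portfolio loss as a linear
# program (Rockafellar-Uryasev reformulation). Variables are the weights w,
# the VaR proxy zeta, and per-scenario excess losses u.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
S, N, alpha = 1000, 5, 0.95               # scenarios, assets, CVaR level

# Synthetic, equally weighted return scenarios (as in historical simulation).
scenarios = rng.normal(0.004, 0.03, (S, N))

# x = (w_1..w_N, zeta, u_1..u_S); objective: zeta + mean tail-excess term.
c = np.concatenate([np.zeros(N), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])

# u_s >= loss_s - zeta, with loss_s = -w @ r_s  ->  -r_s.w - zeta - u_s <= 0
A_ub = np.hstack([-scenarios, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)

# Fully invested, long-only: sum(w) = 1, w >= 0; zeta free, u >= 0.
A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(S)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * N + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
w = res.x[:N]
print("optimal weights:", np.round(w, 3))
print(f"minimized {alpha:.0%}-CVaR of loss: {res.fun:.4f}")
```

Because the problem is a linear program, it scales to large scenario sets, which is the computational point made above.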