computersciencegeeks.wordpress.com
Interview Questions & Puzzles | Computer Science Geeks
https://computersciencegeeks.wordpress.com/2013/05/20/interview-questions-puzzles
If you can dream it, you can do it. — Walt Disney. SQL – Part 1. What Changed My Life. Are You Optimistic Person? How To Read A Book? Now I am 22. Interview Questions & Puzzles. Today, I want to share with all of you my little experience with interview questions for a Software Engineer position, along with some interesting puzzles. Let's start with the questions. What are the 4 major OOP principles? What is an abstract class? I have a SQL tab...
computersciencegeeks.wordpress.com
Two Years Passed :) | Computer Science Geeks
https://computersciencegeeks.wordpress.com/2013/07/01/two-years-passed
Now I am 22 →. Two Years Passed :). Today, 01-07-2013, is the anniversary of my lovely blog ღ. I have been blogging for 2 years 🙂. I have big dreams to achieve and to write about here in the coming years: I want to write about Android and other things related to mobile development and mobile games. Posted by Esraa Ibrahim.
almoststochastic.com
almost stochastic: Batch MLE for the GARCH(1,1) model
http://www.almoststochastic.com/2014/06/batch-mle-for-garch11-model.html
A blog on probability. Batch MLE for the GARCH(1,1) model. In this post, we derive the batch MLE procedure for the GARCH model in a more principled way than the last GARCH post. The derivation presented here is simple and concise. There are some stability constraints for this model: \begin{align*} \alpha + \beta \leq 1, \quad \alpha \geq 0, \quad \beta \geq 0, \quad c > 0. \end{align*} (Batch MLE for GARCH) Read $x_{1:n}$ and initialize $\theta_1$. Simulate $v_{1:n}^{\theta_j}$: ...
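The snippet's recipe (read $x_{1:n}$, initialize $\theta_1$, run the variance recursion) can be sketched in numpy. This is a minimal illustration, not the post's derivation: the initialisation of the variance recursion, the sample size, and the parameter values are assumptions, and the likelihood is the standard Gaussian GARCH(1,1) one.

```python
import numpy as np

def simulate_garch(c, alpha, beta, n, rng):
    """Simulate x_{1:n} from a GARCH(1,1): x_t = sqrt(v_t) * e_t, e_t ~ N(0,1)."""
    x = np.empty(n)
    v = c / (1.0 - alpha - beta)  # stationary variance as the starting value
    for t in range(n):
        x[t] = np.sqrt(v) * rng.standard_normal()
        v = c + alpha * x[t] ** 2 + beta * v  # GARCH(1,1) variance recursion
    return x

def garch_nll(theta, x):
    """Gaussian negative log-likelihood with theta = (c, alpha, beta)."""
    c, alpha, beta = theta
    v = np.empty_like(x)
    v[0] = np.var(x)  # assumed initialisation of the v_{1:n} recursion
    for t in range(1, len(x)):
        v[t] = c + alpha * x[t - 1] ** 2 + beta * v[t - 1]
    # x_t | v_t ~ N(0, v_t), so the NLL sums 0.5 * (log(2*pi*v_t) + x_t^2 / v_t)
    return 0.5 * np.sum(np.log(2.0 * np.pi * v) + x ** 2 / v)

# read x_{1:n} (here: simulated) and evaluate the objective the batch MLE minimises
rng = np.random.default_rng(1)
x = simulate_garch(c=0.1, alpha=0.1, beta=0.8, n=2000, rng=rng)
nll = garch_nll((0.1, 0.1, 0.8), x)
```

Any numerical optimiser over the constraint set above (e.g. a bounded quasi-Newton routine) can then minimise `garch_nll` to produce the batch estimate.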
almoststochastic.com
almost stochastic: August 2013
http://www.almoststochastic.com/2013_08_01_archive.html
A blog on probability. In this post, we review sequential importance sampling-resampling for state-space models. These algorithms are also known as particle filters. We give a derivation of these filters and their application to general state-space models. Labels: monte carlo methods. "There are some enterprises in which a careful disorderliness is the true method." (Moby Dick, chapter 82). By Ömer Deniz Akyıldız.
newtonexcelbach.wordpress.com
Using Array Functions and UDFs | Newton Excel Bach, not (just) an Excel Blog
https://newtonexcelbach.wordpress.com/using-array-functions-and-udfs-and-following-links
Newton Excel Bach, not (just) an Excel Blog. An Excel blog for engineers and scientists, and an engineering and science blog for Excel users. About Newton Excel Bach. Using Array Functions and UDFs. Using UDFs: Continuous Beam Example.
almoststochastic.com
almost stochastic: Probabilistic models of nonnegative matrix factorisation
http://www.almoststochastic.com/2014/07/probabilistic-models-of-nonnegative.html
A blog on probability. Probabilistic models of nonnegative matrix factorisation. I wrote this post last year and thought it would be good to publish it here. Here, we give a brief review of probabilistic models of nonnegative matrix factorisation (NMF). We mainly list the papers that are important for gaining intuition, and sketch the main ideas without too much mathematical detail. [5] C. Févotte and...
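As context for the models the post reviews, the classical non-probabilistic baseline is the Lee–Seung multiplicative-update NMF under the Euclidean cost $\|V - WH\|^2$. The sketch below is an illustrative numpy version with assumed sizes and iteration count, not any of the probabilistic models from the post:

```python
import numpy as np

# Lee-Seung multiplicative updates for NMF minimising ||V - W H||^2,
# with W, H kept elementwise nonnegative. Sizes and rank are arbitrary.
rng = np.random.default_rng(0)
V = rng.random((20, 30))            # nonnegative data matrix
K = 4                               # inner rank (an assumption)
W = rng.random((20, K))
H = rng.random((K, 30))

for _ in range(200):
    # each update multiplies by a ratio of nonnegative terms, so W, H stay >= 0
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H)     # reconstruction error after fitting
```

The multiplicative form is exactly what makes nonnegativity self-maintaining, and the probabilistic NMF literature reinterprets such factorisations as inference in latent-variable models.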
almoststochastic.com
almost stochastic: Sequential importance sampling-resampling
http://www.almoststochastic.com/2013/08/sequential-importance-sampling.html
A blog on probability. In this post, we review sequential importance sampling-resampling for state-space models. These algorithms are also known as particle filters. We give a derivation of these filters and their application to general state-space models, building on this previous post. As we will use the sequential importance sampling (SIS) algorithms for hidden Markov models (HMMs), we define them in a nutshell. Note that by an HMM, we mean a general state-space model. At time $n = 1$: approximate the ...
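The scheme the snippet describes (propagate, weight by the observation likelihood, resample) can be sketched as a bootstrap particle filter on a toy linear-Gaussian state-space model. The model, noise scales, and particle count below are illustrative assumptions, not taken from the post:

```python
import numpy as np

# Bootstrap particle filter (SIS with resampling) for a toy HMM:
#   state:       z_t = 0.9 * z_{t-1} + N(0, 1)
#   observation: y_t = z_t + N(0, 0.5^2)
rng = np.random.default_rng(0)
T, N = 100, 500                     # time steps and particle count (assumed)

# simulate a trajectory and observations from the model
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.9 * z[t - 1] + rng.standard_normal()
y = z + 0.5 * rng.standard_normal(T)

particles = rng.standard_normal(N)  # at time n = 1: sample from the prior
est = np.zeros(T)
for t in range(T):
    if t > 0:
        # propagate each particle through the state transition
        particles = 0.9 * particles + rng.standard_normal(N)
    # importance weights from the observation likelihood p(y_t | z_t)
    logw = -0.5 * (y[t] - particles) ** 2 / 0.5 ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)  # weighted filtering-mean estimate
    # multinomial resampling to combat weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]
```

The `est` array approximates the filtering means $E[z_t \mid y_{1:t}]$; resampling every step is the simplest choice, though adaptive schemes are common.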
almoststochastic.com
almost stochastic: January 2014
http://www.almoststochastic.com/2014_01_01_archive.html
A blog on probability. Convergence of gradient descent algorithms. In this post, I review the convergence proofs of gradient algorithms. Our main reference is: Léon Bottou, Online Learning and Stochastic Approximations. I rewrite the proofs described in Bottou's paper, but with more detail on the points that are subtle to me. I tried to write the proofs as clearly as possible, so as to make them accessible to everyone. By Ömer Deniz Akyıldız.
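A concrete instance of the kind of result such proofs cover: plain gradient descent on a smooth convex quadratic with step size $\eta \le 1/L$ (where $L$ is the largest eigenvalue of $A^\top A$) decreases the cost monotonically. This is a minimal numpy illustration with assumed problem sizes, not Bottou's stochastic setting:

```python
import numpy as np

# Gradient descent on f(w) = 0.5 * ||A w - b||^2 (L-smooth, convex).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

L_const = np.linalg.eigvalsh(A.T @ A).max()  # smoothness constant L
eta = 1.0 / L_const                          # step size satisfying eta <= 1/L

w = np.zeros(5)
costs = []
for _ in range(200):
    costs.append(0.5 * np.sum((A @ w - b) ** 2))
    w -= eta * A.T @ (A @ w - b)             # gradient step: w <- w - eta * grad f(w)
```

By the descent lemma, each step reduces the cost by at least $\|\nabla f(w)\|^2 / (2L)$, which is the basic building block the convergence proofs iterate on.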
almoststochastic.com
almost stochastic: November 2013
http://www.almoststochastic.com/2013_11_01_archive.html
A blog on probability. Fatou's lemma and monotone convergence theorem. In this post, we deduce Fatou's lemma and the monotone convergence theorem (MCT) from each other. Young's, Hölder's and Minkowski's inequalities. In this post, we prove Young's, Hölder's and Minkowski's inequalities in full detail. We prove Hölder's inequality using Young's inequality, and then Minkowski's inequality using Hölder's.
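The chain of deductions the snippet describes can be summarised in one line each. Young's inequality states that for conjugate exponents $p, q > 1$ with $\tfrac{1}{p} + \tfrac{1}{q} = 1$,

$$ab \;\le\; \frac{a^p}{p} + \frac{b^q}{q}, \qquad a, b \ge 0.$$

Applying it pointwise with $a = |f(x)|/\|f\|_p$ and $b = |g(x)|/\|g\|_q$ and integrating yields Hölder's inequality, $\|fg\|_1 \le \|f\|_p \|g\|_q$; Minkowski's inequality, $\|f+g\|_p \le \|f\|_p + \|g\|_p$, then follows by writing $|f+g|^p \le |f+g|^{p-1}(|f| + |g|)$ and applying Hölder to each of the two terms.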