Friday, December 1, 2017

Machine Learning


    The goal of machine learning is to generalize.

Classic Problem
    Normal Programming: "Hello world"
    Machine Learning:  MNIST

    Problems --> Tools --> Metrics  (apply to all problems?)
    Data to generalize --> Use different algorithms --> Monitor algorithm performance and adjust

Key Words
        Discrete output
        Continuous numeric output
     Gradient descent, Backpropagation, Cost function,
           Any loss consisting of a negative log-likelihood between the empirical distribution
           defined by the training set and the probability distribution defined by the model. For example,
           Mean Squared Error is the cross-entropy between the empirical distribution and a Gaussian model.

     Activation function
           Step function
                discrete 0, 1
           Sigmoid function
           Tanh function

           Rectified Linear function (ReLU)

           Exponential linear unit (ELU)
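The activation functions listed above are easy to sketch in code; here is a minimal illustration (the ELU alpha parameter is an illustrative choice):

```java
// Minimal sketches of the activation functions listed above.
public class Activations {

    // Step function: discrete 0/1 output
    static double step(double x) { return x >= 0 ? 1.0 : 0.0; }

    // Sigmoid: squashes input to (0, 1)
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Tanh: squashes input to (-1, 1)
    static double tanh(double x) { return Math.tanh(x); }

    // ReLU: passes positives through, zeroes out negatives
    static double relu(double x) { return Math.max(0.0, x); }

    // ELU: like ReLU for positives, smooth exponential curve for negatives
    static double elu(double x, double alpha) {
        return x >= 0 ? x : alpha * (Math.exp(x) - 1.0);
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0)); // 0.5
        System.out.println(relu(-3.0));   // 0.0
    }
}
```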

     Training data set
           Train parameter
     Validation data set
           Train Hyperparameter
     Test data set
     Bias, Variance
         Linked to capacity, underfitting, overfitting

     Closed-form solution

     Weight, Bias, Learning rate
           for example: Learning rate

    Kernel trick
    Maximum likelihood estimation
             Point estimate of variables

     Bayesian estimation
             Full distribution of variables
         Hill Climbing
              One step along one axis at a time
              Achieves the optimal solution for convex problems
              Problems: local maxima, ridges and alleys, plateaus
              Good for functions that are complex and/or not differentiable
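The coordinate-wise hill climbing above can be sketched as follows; the objective function, step size, and iteration count are illustrative choices, not from the notes:

```java
// Hill climbing sketch: take one step along one axis at a time, keeping
// the step only if it improves the objective. Note it needs no gradient.
// Maximizes f(x, y) = -(x - 1)^2 - (y + 2)^2, whose peak is at (1, -2).
public class HillClimbing {

    static double f(double x, double y) {
        return -(x - 1) * (x - 1) - (y + 2) * (y + 2);
    }

    static double[] climb(double x, double y, double step, int iterations) {
        for (int i = 0; i < iterations; i++) {
            double best = f(x, y);
            double bx = x, by = y;
            // Try one step in each axis direction; keep the best improvement.
            double[][] moves = { { step, 0 }, { -step, 0 }, { 0, step }, { 0, -step } };
            for (double[] m : moves) {
                double v = f(x + m[0], y + m[1]);
                if (v > best) { best = v; bx = x + m[0]; by = y + m[1]; }
            }
            x = bx;
            y = by;
        }
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double[] p = climb(0, 0, 0.01, 1000);
        System.out.println(p[0] + ", " + p[1]); // approaches (1, -2)
    }
}
```

On this convex bowl it converges; on a function with local maxima it would get stuck wherever no single-axis step improves the objective, which is exactly the weakness listed above.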

         Gradient Descent
             Vanishing/exploding gradients problems
             Approaches to mitigate: He initialization, Batch Normalization
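As a minimal sketch of gradient descent itself, here it minimizes the toy cost f(w) = (w - 3)^2; the cost function, learning rate, and step count are illustrative assumptions:

```java
// Gradient descent on f(w) = (w - 3)^2; the gradient is 2 * (w - 3).
public class GradientDescentDemo {

    static double minimize(double w, double learningRate, int steps) {
        for (int i = 0; i < steps; i++) {
            double gradient = 2.0 * (w - 3.0); // derivative of the cost at w
            w -= learningRate * gradient;      // step against the gradient
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(minimize(0.0, 0.1, 100)); // converges toward 3.0
    }
}
```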




     Regularization
         Modifications to ML algorithms intended to reduce generalization error, not training error
         Goal: a small gap between training error and test error
         Example: weight decay for linear regression
         Early stopping, L1, L2, Dropout, Max-Norm, Data Augmentation

     Supervised Learning
            features + labels
            Nonprobabilistic SL
                  K-Nearest Neighbor
             Decision Tree
     Unsupervised Learning
            features without labels
     Reinforcement Learning
             Learning by getting feedback from the environment

     Preprocessing
         Modify or filter data before feeding it to learning algorithms
         Feature selection
         Feature extraction
         Dimension reduction (PCA, manifold learning)
         Kernel approximation

    Cross-validation schemes
         Stratified K-fold
         Leave-one-out (for small amounts of data)
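A k-fold scheme just partitions the sample indices so that each fold serves once as the validation set while the rest train. A minimal, unshuffled sketch (real libraries also shuffle and, for stratified k-fold, preserve class proportions):

```java
import java.util.ArrayList;
import java.util.List;

// Partition sample indices 0..nSamples-1 into k contiguous validation folds.
public class KFold {

    static List<int[]> folds(int nSamples, int k) {
        List<int[]> result = new ArrayList<>();
        int base = nSamples / k;
        int extra = nSamples % k; // first 'extra' folds get one more sample
        int start = 0;
        for (int f = 0; f < k; f++) {
            int size = base + (f < extra ? 1 : 0);
            int[] fold = new int[size];
            for (int i = 0; i < size; i++) fold[i] = start + i;
            start += size;
            result.add(fold);
        }
        return result;
    }

    public static void main(String[] args) {
        // 10 samples, 3 folds -> sizes 4, 3, 3
        for (int[] fold : folds(10, 3)) System.out.println(fold.length);
    }
}
```

With k = nSamples this degenerates into leave-one-out, which is why LOO is just the extreme case of the same scheme.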

    Dimension Reduction
Math behind ML

     Classification, Regression, Clustering, Dimension reduction

    Linear Regression
        Find optimal weights by solving normal equations
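For a single feature, the normal equations collapse to a closed-form slope and intercept: slope = cov(x, y) / var(x), intercept = meanY - slope * meanX. A minimal sketch (the toy data is illustrative):

```java
// Closed-form least squares for one feature via the normal equations.
public class LinearRegressionClosedForm {

    // Returns {intercept, slope}.
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n;
        meanY /= n;
        double cov = 0, var = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - meanX) * (y[i] - meanY);
            var += (x[i] - meanX) * (x[i] - meanX);
        }
        double slope = cov / var;
        return new double[] { meanY - slope * meanX, slope };
    }

    public static void main(String[] args) {
        // Toy data lying exactly on y = 2x + 1.
        double[] w = fit(new double[] {1, 2, 3}, new double[] {3, 5, 7});
        System.out.println("intercept=" + w[0] + " slope=" + w[1]);
    }
}
```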

    Logistic Regression
         No closed-form solution; maximize the log-likelihood (equivalently, minimize the negative log-likelihood) using gradient descent.
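A minimal sketch of that gradient-descent fit for one feature; the toy data and hyperparameters are illustrative assumptions:

```java
// Logistic regression with one feature, trained by full-batch gradient
// descent on the negative log-likelihood.
public class LogisticRegressionGD {

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Returns {bias, weight}.
    static double[] fit(double[] x, int[] y, double lr, int epochs) {
        double w = 0, b = 0;
        int n = x.length;
        for (int e = 0; e < epochs; e++) {
            double gw = 0, gb = 0;
            for (int i = 0; i < n; i++) {
                // Gradient of the NLL: (prediction - label) times the input.
                double err = sigmoid(b + w * x[i]) - y[i];
                gw += err * x[i];
                gb += err;
            }
            w -= lr * gw / n;
            b -= lr * gb / n;
        }
        return new double[] { b, w };
    }

    public static void main(String[] args) {
        double[] x = { -2, -1, 1, 2 };
        int[] y = { 0, 0, 1, 1 };
        double[] p = fit(x, y, 0.5, 2000);
        // Probability of class 1 for a clearly positive example.
        System.out.println(sigmoid(p[0] + p[1] * 2));
    }
}
```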

    Neural Network
    RNN (Recurrent Neural Network)
    CNN (Convolutional Neural Network)

    Decision Tree

    Identification Tree

    Naive Bayes
           Assumes features are independent of each other
           Conditional probability model
           Highly scalable; requires only a small amount of training data
           Linear running time
           Generally outperformed by other algorithms, such as SVMs

    Support Vector Machines
           For both classification and regression
           Finds the widest "street" that separates instances of different classes

    Random Forest
         Decision Tree ensemble

Test Methodologies
   Leave one out   LOO
       for small amounts of data

   Data split (80/20)
Practical Guidelines for DNN
   Initialization:             He
   Activation:                 ELU
   Normalization:              Batch Normalization
   Regularization:             Dropout
   Optimizer:                  Adam
   Learning Rate Schedule:     None

    TensorFlow, Scikit-learn
    Spark MLlib, Spark ML, Weka

Use cases
    Linear Regression
          House size---> House price in a community
    Naive Bayes
          Document classification: separate legitimate emails from spam emails
          For example, based on key words: cheap, free
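The spam intuition above can be sketched numerically: under the independence assumption, the per-word likelihoods simply multiply. All probabilities below are made-up illustrative numbers, not real spam statistics:

```java
// Naive Bayes posterior for the spam example:
// P(spam | words) is proportional to P(spam) * product of P(word | spam),
// and likewise for ham; normalizing the two scores gives the posterior.
public class NaiveBayesSpamSketch {

    static double spamPosterior(double pSpam, double[] pWordGivenSpam,
                                double[] pWordGivenHam) {
        double spamScore = pSpam;
        double hamScore = 1.0 - pSpam;
        for (int i = 0; i < pWordGivenSpam.length; i++) {
            spamScore *= pWordGivenSpam[i]; // independence assumption
            hamScore *= pWordGivenHam[i];
        }
        return spamScore / (spamScore + hamScore);
    }

    public static void main(String[] args) {
        // Suppose "cheap" and "free" each appear in 80% of spam and 10% of
        // ham (made-up numbers), with a 50/50 prior.
        double p = spamPosterior(0.5, new double[] {0.8, 0.8}, new double[] {0.1, 0.1});
        System.out.println("P(spam | cheap, free) = " + p);
    }
}
```

Even with only two keywords the posterior is already very close to 1, which is why such a simple model works surprisingly well for email filtering.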
    When to use which algorithm(s)?

Classic Applications
       AlphaGo vs. Lee Sedol

       Netflix movie recommendations

       Image recognition

       Natural language processing

     No ML algorithm is universally better than any other algorithm.
     Understand data distribution, and pick proper algorithm(s).

          (One of my favorite books, highly recommended)
           Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

           Deep Learning (Adaptive Computation and Machine Learning series)




    Machine learning series from Luis Serrano  (Very good explanations for beginners)

    (AWS machine learning service)

    (Spark MLlib example)


Tuesday, January 31, 2017

String valueOf() pitfalls

What will be the console output of this program?

public class TestStringValueOf {

    public static void main(String[] args) {
        char a = 'a';
        String str1 = String.valueOf(a);
        String str2 = String.valueOf(a);
        System.out.println("char comparison:" + (str1 == str2));

        double d = 12.3d;
        String str3 = String.valueOf(d);
        String str4 = String.valueOf(d);
        System.out.println("double comparison:" + (str3 == str4));

        boolean b = false;
        String str5 = String.valueOf(b);
        String str6 = String.valueOf(b);
        System.out.println("boolean comparison:" + (str5 == str6));

        Object o = null;
        String str7 = String.valueOf(o);
        String str8 = String.valueOf(o);
        System.out.println("Object null comparison:" + (str7 == str8));

        Object notNull = new Object();
        String str9 = String.valueOf(notNull);
        String str10 = String.valueOf(notNull);
        System.out.println("Object Not null comparison:" + (str9 == str10));
    }
}

See the end of this article for the output.

Overall, string comparison should use 'equals', no matter how the String objects were created.

-------console output----------

char comparison:false
double comparison:false
boolean comparison:true
Object null comparison:true
Object Not null comparison:false

Why: String.valueOf(boolean) returns the interned literals "true"/"false", and String.valueOf of a null Object reference returns the literal "null", so in those two cases == happens to compare the same instance. The char, double, and non-null Object conversions construct a new String on every call, so == is false.

Monday, July 18, 2016

Spring MVC UTF-8

Key points


Maven pom.xml: set the project encoding, e.g. <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

JSP pages:

   <%@ page language="java" pageEncoding="UTF-8"%>
   <%@ page contentType="text/html;charset=UTF-8" %>
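Another commonly used key point (not spelled out in the notes above) is registering Spring's CharacterEncodingFilter in web.xml, so request parameters are decoded as UTF-8 before any other filter runs; a typical sketch:

```xml
<!-- Decode request (and, with forceEncoding, response) bodies as UTF-8 -->
<filter>
    <filter-name>encodingFilter</filter-name>
    <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
    <init-param>
        <param-name>encoding</param-name>
        <param-value>UTF-8</param-value>
    </init-param>
    <init-param>
        <param-name>forceEncoding</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>encodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```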

Friday, June 17, 2016

Compile xsl files and store in cache to improve XSLT performance

Here is common code, found online, for XSLT transformation (non-essential pieces removed for brevity):

TransformerFactory transformerFactory = TransformerFactory.newInstance();
Transformer transformer = transformerFactory.newTransformer(new StreamSource(new File(xsltPath)));
transformer.transform(new StreamSource(new File(sourceFilePath)), new StreamResult(new File(resultPath)));

The code works. But if an XSLT file is relatively big and needs to be used over and over again to transform a lot of files, for example in batch mode, it may not perform well.

The following shows a way to cache the compiled version of an XSL file, which is a 'Templates' object. This object is thread-safe.

Code snippet to cache the 'Templates' object.

static final Map<String, Templates> cacheTemplates = new ConcurrentHashMap<String, Templates>();

static TransformerFactory transformFactory = null;

static {
    try {
        transformFactory = TransformerFactory.newInstance();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

public static void cacheCompiled(String xsl) {
    try {
        File file = new File(xsl);
        StreamSource source = new StreamSource(file);
        // Compile the Templates once per file and save it in the cache.
        Templates templates = transformFactory.newTemplates(source);
        cacheTemplates.put(xsl, templates);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

The above 'Templates' object is basically a compiled version of the original XSL file. If the original file is relatively big, for example 20KB, it takes more than 2 seconds on my local machine to transform even a small file. Without caching the Templates object, it takes more than 2 seconds every time; with caching, it takes about 0.1 seconds for every transformation after the first one.

The basic usage is like this:

// Get the Templates object from the cache by xsl file name, then create a Transformer.
Templates templates = cacheTemplates.get(xsltPath);
Transformer transformer = templates.newTransformer();

transformer.transform(new StreamSource(new File(sourceFilePath)),
        new StreamResult(new File(resultPath)));

The 'transformer' object mentioned above is not thread safe.

The SAXON processor seems to be becoming more popular, and the Xalan processor seems to be fading away.

The home edition of the SAXON processor, which is free, may be good enough for a lot of applications.

Friday, November 15, 2013

First impressions on open source ESBs

Having used commercial ESB and BPM products for a couple of years, I recently had a chance to evaluate some open source ESBs.

WSO2: not easy to use; I had difficulty even getting the sample projects to work. No DataMapper tool, which is a big no-no for my projects.

Mulesoft ESB: nice documentation, instructions easy to follow, sample projects can be built and run in a couple of minutes, and a nice DataMapper tool in the 3.4 version. I have not had a chance to build a relatively complex application with it, and I am not sure whether the community edition is good enough to be used in production.

Monday, September 2, 2013

String getBytes could lead to difficult bugs

If you execute the following function, what do you think the size of the 'def' byte array should be?

The logic is really simple: take an input byte array that has two elements, create a string from it with UTF-8 encoding, then create another byte array from that string using the same UTF-8 encoding.

public static void testStringUTF8() {
    byte[] abc = new byte[2];
    abc[0] = 31;   // 0x1F
    abc[1] = -117; // 0x8B

    try {
        String stringAbc = new String(abc, "UTF-8");
        byte[] def = stringAbc.getBytes("UTF-8");
        if (def != null) {
            System.out.println("size of output byte array:" + def.length); // print the array size
            System.out.println(def[1]); // print the second element of the output byte array
            System.out.println(abc[1]); // print the second element of the input byte array
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

The printed size is 4, not 2: the byte 0x8B (-117) is not a valid UTF-8 sequence, so the String constructor silently replaces it with the Unicode replacement character U+FFFD, which encodes back to the three bytes EF BF BD. Round-tripping arbitrary binary data through a String is therefore lossy; keep binary data in byte arrays (or Base64-encode it) instead.

Wednesday, April 11, 2012

How to invoke local EJB session beans in WebLogic

Sometimes you may need to invoke a LOCAL EJB session bean from a normal Java class, for example a Business Delegate class. You can use a ServiceLocator to locate a local EJB session bean proxy by JNDI name. Even though it is relatively easy to do so for a REMOTE EJB session bean by using the value of 'name' or 'mappedName' in the bean class definition, it is a little tricky for LOCAL session beans.

Here is what you need to do.

For example:

Here is an interface:


public interface PlayFacadeInf {
     public void play(String var);
}

Here is the implementation bean class.


public class PlayFacadeImpl implements PlayFacadeInf {
     public void play(String var) {
          // something
     }
}

Here is part of the ejb-jar.xml:

<display-name>myEJB</display-name>
<ejb-name>PlayFacadeImpl</ejb-name>

Here is part of web.xml
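The web.xml fragment itself is missing from the original post; for a local EJB reference it would typically be an ejb-local-ref entry like the following sketch (the names are hypothetical and must match your bean and interface):

```xml
<!-- Hypothetical local EJB reference; ejb-ref-name determines the JNDI
     lookup path java:comp/env/ejb/PlayFacade -->
<ejb-local-ref>
    <ejb-ref-name>ejb/PlayFacade</ejb-ref-name>
    <local>PlayFacadeInf</local>
    <ejb-link>PlayFacadeImpl</ejb-link>
</ejb-local-ref>
```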


Here is part of the ServiceLocator class:

private static InitialContext ctx = null;

static {
    try {
        ctx = new InitialContext();
    } catch (NamingException e) {
        //... throw some exception
    }
}

private static InitialContext getInitialContext() throws NamingException {
    return ctx;
}

public static PlayFacadeInf getPlayFacade() throws NamingException {
    // The lookup expression was omitted in the original post;
    // "java:comp/env/ejb/PlayFacade" is a hypothetical JNDI name that must
    // match the ejb-local-ref declared in web.xml.
    PlayFacadeInf playFacadeInf = (PlayFacadeInf)
            getInitialContext().lookup("java:comp/env/ejb/PlayFacade");
    return playFacadeInf;
}

Then any normal java class can use the ServiceLocator to get hold of the local ejb session bean proxy.