One Size Does Not Fit All

What data mining and machine learning tasks have in common is that they extract models from data. But the purpose varies, from analytics over predictive modeling to control. There is no single method that covers everything.
In analytics you want to gain insight. You want understandable, interpretable models that tell you about the most influential parameters and help you detect the pink noise. Rule- and tree-based methods are quite good for that. But to understand models better you need to test them, so they also need to be computational. Fuzzy variants of rule, tree, and cluster models are, by the use of fuzzy inference controllers.
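To illustrate why rule-based models suit analytics, here is a minimal sketch of a OneR-style rule learner in plain Python: it picks the single most informative attribute and emits rules a human can read directly. The data and attribute names are hypothetical, and this is far simpler than the fuzzy rule methods mentioned above.

```python
# Minimal OneR-style rule learner: for each attribute, build one rule per
# value (majority class), then keep the attribute whose rules err least.
from collections import Counter, defaultdict

def one_r(rows, target):
    """Return (attribute, rules, error_count) for the best single attribute."""
    best = None
    attrs = [a for a in rows[0] if a != target]
    for attr in attrs:
        buckets = defaultdict(Counter)
        for row in rows:
            buckets[row[attr]][row[target]] += 1
        rules = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in buckets.values())
        if best is None or errors < best[2]:
            best = (attr, rules, errors)
    return best

# Hypothetical toy data: does a machine need maintenance?
data = [
    {"temp": "high", "vibration": "high", "maintain": "yes"},
    {"temp": "high", "vibration": "low",  "maintain": "yes"},
    {"temp": "low",  "vibration": "high", "maintain": "no"},
    {"temp": "low",  "vibration": "low",  "maintain": "no"},
]
attr, rules, errors = one_r(data, "maintain")
print(attr, rules, errors)  # the result reads as "if temp == high then yes"
```

The payoff for analytics is that the learned model *is* its own explanation: each rule names the influential parameter and the decision it implies.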
For predictive modeling your models must be computational. If we use fuzzy methods, we apply regularization techniques to numerically optimize the fuzzy rules. That way we get accurate predictions from interpretable models. But highly accurate predictors might require, say, neural networks.
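The regularization idea can be sketched in its simplest form, one-dimensional ridge regression, where a penalty term shrinks the coefficient and stabilizes the fit. This is only an illustration of the principle; the numerical optimization of fuzzy rules mentioned above is considerably more involved.

```python
# Ridge regression, 1-D closed form: fitting y ≈ w*x with penalty λ gives
# w = Σxy / (Σx² + λ). Larger λ shrinks w toward zero (more bias, less variance).

def ridge_1d(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.9]          # roughly y = x, with noise
w_ols = ridge_1d(xs, ys, 0.0)      # λ = 0: ordinary least squares
w_reg = ridge_1d(xs, ys, 5.0)      # λ > 0: shrunken, more stable coefficient
print(w_ols, w_reg)
```

The same trade-off, a little bias in exchange for a smoother, better-generalizing model, is what regularized rule optimization buys.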
To calculate control functions for given goal parameters, you need to take an inverse-problems view. Solving the backward problem by creating many forward paths requires computational power and clever identification techniques.
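The "many forward paths" idea can be sketched as follows: given only a forward model mapping control input to outcome, find the control that reaches a goal value by evaluating the forward model repeatedly. The plant model here is hypothetical, and bisection only works because it is monotone; real identification problems need cleverer search.

```python
# Inverse problem via repeated forward evaluation: we can run forward(u)
# as often as we like, but have no direct formula for its inverse.

def forward(u):
    # Hypothetical plant model: nonlinear response to control input u.
    return u ** 3 + 2 * u

def solve_inverse(goal, lo=-10.0, hi=10.0, steps=60):
    """Bisect on the (monotone) forward model until it hits the goal."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if forward(mid) < goal:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

u = solve_inverse(goal=12.0)
print(u, forward(u))  # forward(u) ≈ 12, so u is the control we wanted
```

Each bisection step is one forward path; methods that invert non-monotone or noisy models need many more of them, which is where the computational power comes in.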
This is why our machine learning framework comes with a suite of ml methods; kernel methods are numerically optimized in C++. They are integrated into Mathematica's declarative environment, making it easy to solve complex data mining tasks by parallelized arrangements of ml methods. Mlf task builders support ml developers in creating such arrangements and meta-learning algorithms, and automate cross-model testing ...
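Automated cross-model testing, as a principle, looks roughly like this: score several candidate models on the same k-fold split and keep the best. This is a generic Python sketch with made-up model names, not mlf's actual API.

```python
# Cross-model testing sketch: k-fold cross-validation over a set of
# candidate model constructors, reporting mean squared error per model.

def kfold_score(fit, xs, ys, k=4):
    """Average held-out MSE over k interleaved folds for one model."""
    n = len(xs)
    total = 0.0
    for i in range(k):
        test = list(range(i, n, k))
        train = [j for j in range(n) if j not in test]
        predict = fit([xs[j] for j in train], [ys[j] for j in train])
        total += sum((predict(xs[j]) - ys[j]) ** 2 for j in test) / len(test)
    return total / k

def fit_mean(xs, ys):                      # baseline: always predict the mean
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):                    # least-squares line through origin
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: w * x

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8, 7.1, 8.0]
scores = {name: kfold_score(f, xs, ys)
          for name, f in [("mean", fit_mean), ("linear", fit_linear)]}
best = min(scores, key=scores.get)
print(best, scores)
```

A framework-level task builder does the same thing one level up: it wires models, folds, and scoring into a declarative, parallelizable arrangement instead of a hand-written loop.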
And to extend the range of available ml methods even further, we provide a Mathematica-to-WEKA interface.
