Why The Golden Age Of Machine Learning is Just Beginning

Even though the buzz around neural networks, artificial intelligence, and machine learning is relatively recent, there is nothing new about these methods. If so many of the core algorithms and approaches have been around for decades, why are they only now getting their day in the sun?

To answer that question, we can look at what has happened over the last five years or so with the attention and tooling around data. We can also point to the dramatic increase in scalable compute power, or more specifically, performance per watt and per bit. These two factors combined have fed a development frenzy, pushing data analysis well beyond the standard database and calculation approaches that have themselves been around for decades. The point is, we are at peak “data hype”: there was a rush to develop a host of new tools and frameworks (Hadoop, as but one example) to support larger, more complex datasets, followed by a secondary effort to push the performance of data analysis on those new or enhanced frameworks.

So could it be that machine learning in particular is the next natural step for all the companies and end users who have climbed aboard the data express? Indeed, the attention around large-scale, complex analytics, and the systems and frameworks to support them, spurred some of that evolution. But one could argue that for some analytical workloads, in both the research and enterprise spaces, those advances have hit their own peak. The new methods and approaches sown in the fertile “big data” soil have grown and been tested. And there is, again, for a narrow (but growing) set of workloads, room for another way of thinking about complex problem solving.

This is not to say that there hasn’t been ongoing research and development around new machine learning approaches that can leverage ultra-scalable hardware. But there is a bigger story, explains Patrick Hall, who holds the unique position of senior machine learning scientist at statistics software giant SAS. His title is noteworthy because he is finding solutions to problems that don’t fit well into the classical statistical modeling approaches his company specializes in, with the goal of eventually integrating those methods into existing enterprise products.

Hall’s assertion is that while all of the aforementioned trends are pushing machine learning to the forefront, the one thing that is different now is that data finally exists in volumes and shapes that do not work well for traditional statistical analysis. That, coupled with new developments in machine learning algorithms, means the golden age of machine learning is finally arriving.

“This is data that can be found in many places; it’s wider than it is long, with more columns than rows, more variables than observations. All of that is a bad fit for traditional statistics. There is now more data with correlated variables (for instance, pixels that are related in image data) or even in text mining.” Equally, Hall says, there is a wealth of new data from a range of sources defined by missing or sparse values, where 1 percent or less of an entire dataset contains actual values.
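To make those shapes concrete, here is a minimal sketch, not drawn from the article, of the two situations Hall describes: a “wide” matrix with far more columns than rows, and a sparse matrix in which only about 1 percent of cells hold actual values. NumPy and SciPy are assumed purely for illustration; no SAS tooling is shown.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Wide data: 100 observations, 10,000 variables (far more columns than rows).
X_wide = rng.normal(size=(100, 10_000))
print(X_wide.shape)                   # (100, 10000)
print(np.linalg.matrix_rank(X_wide))  # at most 100: ordinary regression has no unique fit here

# Sparse data: roughly 1 percent of cells hold actual values; a compressed
# sparse format stores only the nonzero entries instead of every empty cell.
X_sparse = sparse.random(10_000, 5_000, density=0.01, format="csr", random_state=0)
filled = X_sparse.nnz / (X_sparse.shape[0] * X_sparse.shape[1])
print(f"{filled:.2%} of cells populated")
```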

And for businesses that want to invest in analyzing this data where traditional statistics don’t fit, there is a huge opportunity: one that is feeding a wealth of startups and new initiatives from established analytics companies, who seem to have gotten the message that calling a product “machine learning” is all the rage, even if it is just a slightly upgraded version of analytics. That causes a problem of definition, and there are, without naming names, some serious examples of analytics and BI companies taking the same old software and slapping a “machine learning” label on it simply because it sounds more robust or complex than data analytics. This is one of the growing pains of any new technology area, especially when the hype machine revs its mighty engines. Hall says users need to understand their data and their problem; once that happens, it will be clear whether a standard statistics and database solution will suit or something more versatile (and likely more complex) is needed.

This isn’t to say that every traditional statistics and database company is merely changing its product messaging rather than its technology around machine learning. SAS introduced its first data mining product in the late 1990s (Enterprise Data Miner), and at the time it already included many of the machine learning models garnering all the hype lately (neural networks, decision trees, k-means clustering, and so on). There were, even then, Hall says, emerging use cases where data coming from the enterprise data warehouse was fit against models free of parametric assumptions. So it’s not new, but the scope and number of those problems is growing, even in places where one might not expect it.
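As a hedged illustration of what “models free of parametric assumptions” means in practice, the sketch below contrasts a parametric linear regression, which assumes a functional form, with a nonparametric decision tree, which does not. It uses scikit-learn and synthetic data as assumed stand-ins; the article does not prescribe any particular library, and SAS’s own implementations are not shown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-6, 6, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)  # a nonlinear signal plus noise

# Parametric: assumes y is a linear function of X, so it misses the sine shape.
linear = LinearRegression().fit(X, y)

# Nonparametric: a decision tree partitions the input space without assuming a functional form.
tree = DecisionTreeRegressor(max_depth=5).fit(X, y)

print("linear R^2:", round(linear.score(X, y), 3))  # low: the assumed linear form is a poor fit
print("tree R^2:  ", round(tree.score(X, y), 3))    # much higher on this data
```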

Among the enterprise arenas ripe for a machine learning boom are banking, insurance, and the credit card industry. Interestingly, all three are regulated markets where a black-box approach to a problem is troublesome for regulators. “There is always a tradeoff with machine learning,” Hall says. “You trade interpretability for what you’re hoping is more accuracy, and this is a tough tradeoff for regulated industries. But the fact is, they are seeing an opportunity finally, and this tradeoff is one they are increasingly comfortable with.”
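That tradeoff can be sketched in a few lines: an interpretable generalized linear model whose coefficients a regulator can inspect, versus a tree ensemble that often scores higher but resists simple explanation. Again, scikit-learn and the synthetic dataset here are assumptions for illustration, not anything SAS or Hall prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=30, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: every coefficient can be read (and defended to a regulator) as a
# directional effect of one variable on the prediction.
glm = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Black box: often more accurate, but the effect of any one variable is spread
# across hundreds of trees and is much harder to explain.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(glm.score(X_test, y_test), 3))
print("gradient boosting accuracy:  ", round(gbm.score(X_test, y_test), 3))
```

Any accuracy gap between the two has to be weighed against the cost of explaining the second model to an auditor, which is exactly the calculation Hall says banks and insurers are becoming more comfortable making.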

Hall and his company are well aware that they will have to keep innovating at both the language and product level to keep pace with the wave of machine learning startups being funded one after another. “There is indeed a lot of competition for attention right now,” he agrees. “We are trying to adapt our technology to these problems with concurrency and scalability for machine learning, but this is SAS, which means we are confined to a language syntax that, admittedly, looks old.” He says that even though the technology is just as robust as ever, SAS is “stuck” because changing the core syntax would mean the mainframes at American Express and Bank of America come crashing down. “What we can do is change what runs behind that syntax, and that is what we are working on now.”

It is hard to say at this point how large-scale enterprises will think about all of the data in their warehouses that doesn’t fit the standard regression modeling bill. But to be fair, doing more complex things with familiar frameworks and approaches is going to have its value, especially for regulated industries looking to beef up their analysis with machine learning methods, since at least there is a root level of formality and familiarity. This is where SAS hopes to succeed with its foray into machine learning for the large enterprise, and where some of the emerging startups will have a tough time moving past consumer-focused image and facial recognition, speech recognition, and other areas.

It might also be too soon to say that machine learning is seeing the dawn of its golden age, but there is something on the horizon, glinting in the distance. Given the wealth of new investment and attention around machine learning as the next great partner for big data tools and approaches, this does not seem like a stretch.
