This month is a big one for Artificial Intelligence (AI). Facebook announced in the past week that it will dramatically increase its investment in AI research and development to ensure it doesn’t fall behind as a technology innovator. Amazon, for its part, announced new capabilities for its artificial intelligence, machine learning and compute services on the AWS cloud at its NYC Summit 2018, including a new way to build models, while next week’s Google Cloud Next ’18 conference is expected to lift the lid on a number of new AI initiatives.
All three companies, along with Microsoft, are looking to gain dominance in a market that is still only starting to grow. To do that, they need to offer enterprises a way of using and incorporating AI into their business DNA that will not disrupt business processes and strategies. What do these companies need to offer?

Pascal Kaufmann is a neuroscientist, AI entrepreneur and founder of Switzerland-based Starmind, whose technology applies neuroscientific principles to AI development: it identifies experts on any subject within an organization and connects them to the colleagues who need that expertise most. Kaufmann said that AI, when applied to businesses, has the most impact if the following three conditions are met:
The ROI [Return On Investment] of an AI technology can be quantified best when it is benchmarked against human workers doing the same job. For example, even if the machine was 10 times slower, being 20 times more cost effective would already result in a convincing business case.

Related Article: 6 Ways Artificial Intelligence Will Impact the Future Workplace
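To put Kaufmann’s benchmark in concrete terms, the short Python sketch below compares cost per task for a human worker and a machine. All of the figures (hourly costs and task rates) are hypothetical and chosen only to mirror the “10 times slower, 20 times more cost effective” scenario.

```python
# Hypothetical ROI benchmark: a machine that is 10x slower than a human worker
# but costs 20x less per hour still wins on cost per task. Figures are illustrative.

def cost_per_task(hourly_cost: float, tasks_per_hour: float) -> float:
    """Cost of completing one task at a given hourly rate and throughput."""
    return hourly_cost / tasks_per_hour

human_cost = cost_per_task(hourly_cost=40.0, tasks_per_hour=10.0)   # $4.00 per task
machine_cost = cost_per_task(hourly_cost=2.0, tasks_per_hour=1.0)   # $2.00 per task

print(f"Human:   ${human_cost:.2f} per task")
print(f"Machine: ${machine_cost:.2f} per task")
print(f"Saving:  {1 - machine_cost / human_cost:.0%} per task")
```

Even with a tenfold speed disadvantage, the lower hourly cost halves the cost per task in this example, which is exactly the kind of benchmark Kaufmann suggests running before committing to a project.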
“’When you do AI right, it generates value and ROI for the enterprise’ is an excellent premise, however the full potential of AI hasn’t been attained,” said AJ Abdallat, CEO of Glendale, Calif.-based Beyond Limits. “Many conventional AI systems are merely machine learning, or neural networks, or deep learning. They’re good at handling large sets of data but lack situational awareness or the ability to navigate around missing or incomplete data. They get stuck.”

He cites the example of a machine learning system that can be trained to identify photos of chairs but will never know what a human being is, or why a person might need three kinds of chairs for different reasons. As AI systems acquire cognitive reasoning abilities — an evolutionary leap beyond conventional AI — they employ a human-like ability to perceive, understand, correlate, learn, teach, reason and solve problems faster than conventional AI solutions.

“Cognitive AI systems are designed to magnify human talent, providing actionable information faster, reducing risk and identifying opportunity. In our view, the real potential of AI is a symbiotic relationship with people, almost like an assistant that enables humans to apply their attention, experience, and passions to solving problems that truly matter,” he said.
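For context, the kind of “conventional AI” Abdallat describes can be sketched in a few lines: a classifier that learns to attach a “chair” label to inputs. The data below is synthetic (random vectors standing in for image features) and exists only to show that such a model produces labels without any notion of people, purpose or context.

```python
# A minimal sketch of a conventional classifier: it can learn to output "chair" labels,
# but it has no situational awareness and no model of why anyone might need a chair.
# The feature vectors are synthetic stand-ins for image features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # stand-in for image feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # 1 = "chair", 0 = "not chair"

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))                      # labels only; no reasoning about context
```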
Intelligence is also moving to the edge. With cognitive intelligence and situational awareness embedded in edge devices, these devices will be able to read sensor data and analyze it in the context of historical data, human expertise and overall system performance goals to solve problems on the spot, in real time. “This has profound positive implications to deliver the benefits of AI to industries as diverse as healthcare applications in clinical patient care, industrial process control in remote or dangerous locations, or bringing human expertise to every node in a network, no matter how geographically dispersed,” he added.

It doesn’t end there, though. The next big era in AI, and the technology that will bring immense ROI for enterprises, is intelligent hardware. Yet even where organizations have access to affordable, powerful tools and hardware that make this possible, providing access to the data is only part of the equation, ThoughtSpot Chief Data Officer Doug Bordonaro added. Employees must be able to assess the value of the data they have and interpret it properly. Not everyone needs to be a data scientist to get value from data, but everyone needs to be data-literate.

“Once employees have access to data, they need to be able to view it, manipulate it, and share results with colleagues. Confining data to a desktop application is limiting and leads to inconsistencies as information gets out of date. Having a common platform for viewing, analyzing, and sharing data is helpful. It provides a single source of truth, ensuring everyone has access to the latest information. It’s also much easier to enforce policies around security and governance when data is centrally stored and managed,” he said.

Related Article: 8 Examples of Artificial Intelligence (AI) in the Workplace
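As a concrete illustration of the edge scenario described above, the sketch below keeps a rolling window of historical sensor readings on the device and flags anomalies locally, without a round trip to the cloud. The window size and the three-sigma rule are assumptions made purely for illustration.

```python
# A minimal sketch of edge analytics: compare each new sensor reading against recent
# on-device history and raise an alert on the spot. Thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

class EdgeMonitor:
    def __init__(self, window: int = 100, sigmas: float = 3.0):
        self.history = deque(maxlen=window)    # on-device historical context
        self.sigmas = sigmas

    def observe(self, reading: float) -> bool:
        """Return True if the reading looks anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 10:            # wait for a little history first
            mu, sd = mean(self.history), stdev(self.history)
            anomalous = sd > 0 and abs(reading - mu) > self.sigmas * sd
        self.history.append(reading)
        return anomalous

monitor = EdgeMonitor()
for value in [20.1, 20.3, 19.8] * 10 + [35.0]:  # simulated temperature stream
    if monitor.observe(value):
        print(f"Alert: reading {value} deviates from recent history")
```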
At the heart of all AI is a model. Mac Steele, director of product at San Francisco-based Domino Data Lab, said that organizations fail to achieve the promise of their models (and AI overall) because they assume models should be managed like other assets, such as data and software, when they are quite different. To be successful in building, deploying and sustaining models at large scale, companies need to develop an organizational capability of model management. Leading companies have built a strategy composed of five elements, which Steele outlines as follows:

Model Technology – The software tooling and infrastructure stack that gives data scientists the agility they need to build and deploy models.

Model Development – Business processes and systems that allow data scientists to rapidly develop models, experiment, and drive breakthrough research.

Model Production – The mechanism(s) for operationalizing data science research projects into a live product or output that affects the business.

Model Governance – The ability to constantly monitor the activity, performance, and impact of models and data science initiatives across the organization (see the sketch after this list).

Model Context – At the heart of model management, model context encompasses all knowledge, insights, and artifacts generated while building or using models. This is often a company’s most valuable IP. The ability to find, reuse, and build upon it is critical to driving rapid innovation and a model-driven culture.
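As one illustration of the Model Governance element, the sketch below records deployed models in a simple registry and tracks their live accuracy against a review threshold. The record structure, names and threshold are assumptions made for illustration, not a description of Domino Data Lab’s product.

```python
# A minimal sketch of model governance: register deployed models and monitor
# their live performance so degradation can trigger a review. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str
    deployed_at: datetime = field(default_factory=datetime.now)
    outcomes: list = field(default_factory=list)   # 1 = correct prediction, 0 = incorrect

    def log_outcome(self, correct: bool) -> None:
        self.outcomes.append(int(correct))

    def live_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else float("nan")

registry: dict[str, ModelRecord] = {}
record = ModelRecord(name="churn-model", version="1.3.0", owner="data-science")
registry[f"{record.name}:{record.version}"] = record

for correct in [True, True, False, True]:          # outcomes fed back from production
    record.log_outcome(correct)

if record.live_accuracy() < 0.9:                   # governance threshold (assumed)
    print(f"{record.name} v{record.version} at {record.live_accuracy():.0%} accuracy: review needed")
```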
In practical terms, according to Praful Krishna, founder of San Francisco-based Corseer, this means four things in the workplace. Successful AI platforms that provide a worthwhile ROI should do the following:

Train without annotated data – In any enterprise AI project, annotating the training data is the most cumbersome and least enjoyable step. Any AI platform that can train without placing this burden on users can deliver very high ROI.

Offer adaptable platforms – AI platforms that are adaptable and let their users model bespoke solutions are more accurate and deliver higher ROI than AI products that take a cookie-cutter approach. One reason for this is that AI solutions train on very pertinent data, while AI products are really solving somebody else’s problem.

Be able to ingest any data – Enterprises, even the ones with data lakes in place, have their data stored in various media, formats and access levels. An AI platform that can ingest all this data in a relatively automated way is usually more successful.

Offer transparent AI – A disproportionate number of AI projects get stalled because they are black boxes that do not explain why a mortgage application was declined or why a certain diagnosis was made. The current regulatory impetus is toward explainable AI platforms (see the sketch after this list).
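To illustrate the transparency point, the sketch below pairs a decision (for example, declining a mortgage application) with the per-feature contributions that drove it. The features, the synthetic training data and the applicant are all invented for illustration; real systems would typically rely on dedicated explanation methods such as SHAP.

```python
# A minimal sketch of transparent AI: report not just the decision but which inputs
# pushed it in that direction. Data, features and the applicant are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "credit_history_years", "missed_payments"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)   # synthetic approval rule

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 1.5, 0.1, 2.0]])      # hypothetical applicant
decision = "approved" if model.predict(applicant)[0] else "declined"
contributions = model.coef_[0] * applicant[0]       # per-feature pull toward approval

print(f"Decision: {decision}")
for name, value in sorted(zip(features, contributions), key=lambda pair: pair[1]):
    print(f"  {name:>22}: {value:+.2f}")            # most negative = strongest reason to decline
```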
However, for an enterprise to do this, according to Tom Wilde, CEO of AI startup Indico, it should consider implementing platforms that offer the following:

Explainability built into the learning models the platform produces – This enables the data science team to work effectively with the business and subject matter experts (SMEs), demonstrate credibility and meet the variety of requirements around compliance. Without it, the business is very unlikely to participate.

Data segregation – Enterprises should not allow their data to be incorporated back into a vendor’s models to improve those models and benefit other companies or competitors that work with that vendor. Any valid enterprise platform should be able to enforce clear boundaries here (a minimal sketch of that boundary follows this list).

Management of both structured and unstructured content – Most enterprise platforms today are well suited to the former but do not work well with the latter: the messy, document-based text and images that make up over 80 percent of the data in most enterprises.

A collaboration framework or tool – This enables data scientists and SMEs to easily work together to evaluate model performance, label data quickly and adjust on the fly. If the business can’t participate in an efficient manner, it won’t participate, and AI will be very limited in its business impact for the enterprise.
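As a minimal sketch of the data segregation boundary Wilde describes, the example below trains one model per customer so that no tenant’s documents ever feed a shared model. The tenants, documents and labels are placeholders chosen purely for illustration.

```python
# A minimal sketch of data segregation: each tenant's data trains only that tenant's model,
# so one customer's data never improves a model served to another. Data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tenant_data = {
    "acme":   (["invoice overdue", "payment received"], [1, 0]),
    "globex": (["contract renewal", "support ticket"],  [1, 0]),
}

tenant_models = {}
for tenant, (docs, labels) in tenant_data.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(docs, labels)            # trained only on this tenant's documents
    tenant_models[tenant] = model      # stored and served per tenant, never pooled

print(tenant_models["acme"].predict(["invoice overdue again"]))
```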