Google’s Tangent, Salesforce’s myEinstein, Intel-AMD partnership, and HPE’s Superdome Flex among today’s top stories in data science news.
Announcing Python library Tangent
Google introduces Tangent, a Python library for automatic differentiation
Google has announced a new, open-source Python library for automatic differentiation called Tangent. In contrast to existing machine learning libraries, Tangent is a source-to-source system: it consumes a Python function f and emits a new Python function that computes the gradient of f. This gives users much better visibility into gradient computations, as well as easy user-level editing and debugging of gradients. Tangent is aimed at researchers and students who want not only to write their models in Python, but also to read and debug the automatically generated derivative code, without sacrificing speed and flexibility.
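To illustrate the source-to-source idea, here is a minimal hand-worked sketch. It does not use Tangent's API, and the "emitted" gradient function below is written by hand for illustration, not generated by Tangent; the point is that the output of such a system is ordinary, readable Python that can be inspected and checked like any other code.

```python
# A function we want to differentiate.
def f(x):
    return x * x + 3.0 * x

# What a source-to-source tool conceptually emits: a plain Python
# function computing df/dx. (Hand-derived here; Tangent's actual
# generated source may look different.)
def df(x):
    return 2.0 * x + 3.0

# Because the gradient is just Python, it can be sanity-checked
# against a central finite-difference estimate.
def numeric_grad(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2.0 * h)

print(df(2.0))                                      # 7.0
print(abs(df(2.0) - numeric_grad(f, 2.0)) < 1e-4)   # True
```

This readability is the claimed advantage over graph- or tape-based autodiff, where the gradient computation is hidden inside library internals.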
Salesforce in news
Salesforce announces machine learning platform myEinstein to build custom AI apps
Salesforce has unveiled a machine learning platform, myEinstein, at its annual Dreamforce conference on Monday. The myEinstein platform enables users to develop custom AI apps “with clicks, without being a data scientist.” The tool comprises two new services: Einstein Prediction Builder and Einstein Bots. Einstein Prediction Builder enables automatic creation of custom AI models that can predict outcomes for any field or object in Salesforce, while Einstein Bots is a service that can be trained to augment customer service workflows by automating tasks such as answering questions and retrieving information.
Salesforce, Google form strategic partnership on cloud
Salesforce and Google have entered into a cloud partnership that could provide easier integration between Salesforce tools and Google’s G Suite and Google Analytics. Salesforce plans to use Google Cloud Platform (GCP) for its core services as part of its international infrastructure expansion.
Intel-AMD partnership to target Nvidia
Intel teams up with AMD for semi-custom GPU for next-gen mobile chips
In a bid to counter rival Nvidia, Intel has joined hands with AMD to create a next-generation notebook chip. Intel said the new chips will be part of its 8th-generation Core H mobile processors, and will not only feature discrete-level graphics, but also have built-in High Bandwidth Memory (HBM2) packed onto a single board. While more information will be available in the future, the first machines with the new technology will be released in the first quarter of 2018.
New analytics platforms announced
Rockwell unveils Project Scio, a scalable analytics platform for industrial IoT applications
Rockwell Automation has announced Project Scio, a scalable and open platform that gives users secure, persona-based access to all data sources, structured or unstructured. The company said that Scio offers a configurable, easy-to-use interface with which “all users can become self-serving data scientists to solve problems and drive tangible business outcomes.” It can also intelligently fuse related data, delivering analytics in intuitive dashboards – called storyboards – that users can share and view. “Providing analytics at all levels of the enterprise – on the edge, on-premises or in the cloud – helps users have the ability to gain insights not possible before,” said John Genovesi, vice president of Information Software, Rockwell Automation. “When users gain the ability to fuse multiple data sources and add machine learning, their systems could become more predictive and intelligent.”
HPE launches Superdome Flex platform for high performance data analytics for mission critical workloads
Hewlett Packard Enterprise (HPE) has unveiled HPE Superdome Flex, a highly scalable and modular in-memory computing platform. The platform enables enterprises of any size to process and analyze massive amounts of data and turn it into real-time business insights. “With HPE Superdome Flex, customers can capitalize on in-memory data analytics for their most critical workloads and scale seamlessly as data volumes grow,” said Randy Meyer, vice president at HPE.
Other news in data science
Google releases its internal tool Colaboratory
Google has released Colaboratory, yet another of its internal development tools. Built on top of the open-source Jupyter project, Colaboratory is both an education tool and a collaboration tool for research. With Colaboratory, users create notebooks, or documents, that can be simultaneously edited like Google Docs, but with the added ability to run code and show that code’s output within the document. It supports Python 2.7, requires Google Chrome, and is integrated with Google Drive.
Neuromation announces ICO to facilitate AI adoption with blockchain-powered platform
Neuromation is utilizing blockchain technology to create a marketplace, the Neuromation Platform, which will connect multiple parties and bridge the gap between the research, design and implementation stages of AI modeling in a cost-effective manner. The Neuromation ICO is currently in its pre-sale stage, which will end with the public sale, starting on Nov. 28 and ending on Jan. 1, 2018. Of the total of 100,000,000 Neurotokens, 60,000,000 will be available for distribution, with each token priced at 0.001 ETH. According to the project roadmap, the second version of the Neuromation Platform will launch in Q2 2018, followed by v3, with a custom blockchain, in Q3 2018.
DefinedCrowd unveils data platform API at Web Summit 2017
Seattle-based startup DefinedCrowd Corp. announced the release of version 1.0 of its public API at Web Summit 2017 in Lisbon. The product, which will be generally available on November 8, helps companies create new projects, upload tasks, and execute data collection and data processing campaigns in a more streamlined way, directly from their own data and machine learning infrastructure. “The life of data scientists will become easier with this API,” said CEO and Founder Daniela Braga. “They will have the option to integrate their data platforms with DefinedCrowd, having complete control of their projects, working from their own platforms. This will give them direct access to high-quality large-scale data with very little overhead.”