Hadoop - AI-Tech Park
https://ai-techpark.com
AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews
Mon, 17 Jun 2024 05:43:36 +0000

Scality RING boosts genomics with petabyte-scale data lake
https://ai-techpark.com/scality-ring-boosts-genomics-with-petabyte-scale-data-lake/
Fri, 14 Jun 2024 15:15:00 +0000
Data-centric organizations in healthcare, financial, and travel services trust Scality RING as the foundation for AI-powered data lakes

Scality, a global leader in cyber-resilient storage for the AI era, today announced a large-scale deployment of its RING distributed file and object storage solution to optimize and accelerate the data lifecycle for high-throughput genomics sequencing laboratory SeqOIA Médecine Génomique. This is the most recent in a series of deployments where RING is leveraged as a foundational analytics and AI data lake repository for organizations in healthcare, financial services and travel services across the globe.

Selected as part of the France Médecine Génomique 2025 (French Genomic Medicine Plan), SeqOIA is one of two national laboratories integrating whole genome sequencing into the French healthcare system to benefit patients with rare diseases and cancer.

SeqOIA adopted Scality RING to aggregate petabyte-scale genetics data used to better characterize pathologies as well as guide genetic counseling and patient treatment. RING grants SeqOIA biologists efficient access from thousands of compute nodes to nearly 10 petabytes of data throughout its lifecycle, spanning from lab data to processed data, at accelerated speeds and a cost 3-5 times lower than that of all-flash file storage.

“RING is the repository for 90% of our genomics data pipeline, and we see a need for continued growth on it for years to come,” said Alban Lermine, IS and Bioinformatics Director of SeqOIA. “In collaboration with Scality, we have solved our analytics processing needs through a two-tier storage solution, with all-flash access to temporary hot data sets and long-term persistent storage in RING. We trust RING to protect the petabytes of mission-critical data that enable us to carry out our mission of improving care for patients suffering from cancer and other diseases.”

Scality RING powers AI data lakes for other data-intensive industries:
Customers report 59% lower TCO, 366% 5-year ROI and 34% more productive end users.

National insurance provider:
Scality RING powers AI-driven analytics for claim processing

One of the largest publicly held personal lines insurance providers in the United States chose RING as its preferred AI data lake repository for insurance claims analytics, replacing its HDFS (Hadoop Distributed File System) deployment.

The customer has realized 3X improved space efficiency and cost savings, with higher availability through a multi-site RING deployment to support site failover.

Global travel services:
1 petabyte a day to power the world’s travel

A multinational IT services company whose technology fuels the global travel and tourism industry uses Scality RING to power its core data lake. RING ingests one petabyte of new log data each day to maintain a 14-day rotating data lake. This requires RING to purge (delete) the oldest petabyte each day while simultaneously sustaining tens of gigabytes per second (GB/s) of read access for analysis from a cluster of Splunk indexers.
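The 14-day rotation described above is essentially a retention-window computation: each day, identify the day partitions that have aged out and purge them. The sketch below illustrates the idea in plain Python; the date-named partition layout is hypothetical, not Scality's actual schema.

```python
from datetime import date, timedelta

RETENTION_DAYS = 14  # rotating window described in the article

def partitions_to_purge(partitions, today, retention_days=RETENTION_DAYS):
    """Return day-partition names that have aged out of the window.

    `partitions` maps a partition name (e.g. "logs/2024-06-01/") to the
    date it covers; the naming scheme here is illustrative only.
    """
    return sorted(name for name, day in partitions.items()
                  if (today - day).days >= retention_days)

# Example: 16 daily partitions and a 14-day window -> the 2 oldest go.
today = date(2024, 6, 17)
parts = {f"logs/{today - timedelta(days=i)}/": today - timedelta(days=i)
         for i in range(16)}
print(partitions_to_purge(parts, today))
# ['logs/2024-06-02/', 'logs/2024-06-03/']
```

In a real S3-compatible deployment, object stores in this class usually offer lifecycle expiration rules that achieve the same effect without hand-rolled deletes.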

For data lake deployments, these organizations require trusted and proven solutions with a long-term track record of delivering performance and data protection at petabyte-scale. For AI workload processing, they pair RING repositories in an intelligent tiered manner with all-flash file systems as well as leading AI tools and analytics applications, including Weka.io, HPE Pachyderm, Cribl, Cloudera, Splunk, Elastic, Dremio, Starburst and more. With strategic partners like HPE and HPE GreenLake, Scality can deliver managed AI data lakes. Learn more about how to unlock the full value of data wherever it lives at www.hpe.com.

Trusted and proven for AI-powered data lakes at petabyte-scale
Fast data processing is a no-brainer for any AI deployment, but to support world-class, petabyte-scale infrastructures, RING is the only solution that can give customers:

  • Cost savings with 366% five-year ROI
  • Best price/performance through optimal use of flash and HDD
  • Peace of mind with CORE5 end-to-end cyber-resiliency

“Selecting RING was the best decision for us at SeqOIA. RING provides the complete package of features for AI-powered data lakes,” said Alban Lermine. “RING is the most secure, scalable and cost-effective repository for petabyte-scale unstructured data on the market. We can collect, pre-process and analyze data from multiple data sources at dozens of GB/s.” 

RING S3 object storage for AI is unmatched with support for:

  • Retrieval-augmented generation (RAG) access from retrieval- and generative-based artificial intelligence models.
  • Integrated hybrid-cloud capabilities that enable RING to replicate and tier data to external public cloud services for integration with popular AI tools in AWS, Azure and Google.
  • Support for the customer’s choice of hybrid or all-flash storage servers.
  • CORE5 end-to-end cyber-resiliency capabilities to provide ransomware protection.

The combination of capabilities provides customers with a trusted data lake storage solution across multiple stages of the pipeline from data collection, cleansing, analysis, model development and training. RING provides organizations with high-performance and unbreakable data storage at an economic price point to enable 10s to 100s of petabytes for long-term AI data.

For more information about Scality AI Data Lakes go here: scality.com/AI/data-lake

Explore AITechPark for the latest advancements in AI, IoT, Cybersecurity, AITech News, and insightful updates from industry experts!

The post Scality RING boosts genomics with petabyte-scale data lake first appeared on AI-Tech Park.

Spirion announces enhancements to its Sensitive Data Platform
https://ai-techpark.com/spirion-announces-enhancements-to-its-sensitive-data-platform/
Wed, 13 Apr 2022 14:45:00 +0000
Sensitive Data Platform now offers automated classification and remediation based on context; AnyScan connector support for cloud and big data sources including Salesforce and Snowflake; and Azure private cloud hosting

Spirion, a pioneer in data protection and compliance, today announced the release of major new enhancements to its Sensitive Data Platform, providing enterprises greater flexibility in how they find, organize, understand and act upon sensitive information to bolster governance, security and privacy programs and to meet obligations under the California Privacy Rights Act (CPRA), the General Data Protection Regulation (GDPR) and other data privacy regulations.

Today’s release increases the depth of Spirion’s purposeful data classification schema to support the business context of sensitive information and how it’s being used, so organizations can better control and act to protect it through automated remediation. It also extends the breadth of Spirion’s accurate data discovery to encompass new AnyScan™ connectors for big data and cloud repositories including Snowflake, Salesforce, Confluence, Jira, Microsoft Planner, and a collection of Apache Hadoop sources. In addition, organizations can now host Spirion Sensitive Data Platform on their own private Azure cloud subscription.

“Context-rich classification imparts important insights into how an organization’s data is being used. By having this level of context, enterprises are well equipped to understand their data, know where it is located, identify its contents, and ultimately, make better decisions around it. Those decisions may affect privacy, security, governance, or all three,” states Ryan O’Leary, Research Manager, Privacy and Legal Technology for IDC.

“Data privacy and security require more than just knowing where data is,” said Rob Server, Spirion Field CTO. “It’s also about gaining greater clarity and transparency around how data is being used. Today’s new platform enhancements underscore Spirion’s commitment to finding sensitive data, wherever it lives, with unrivaled accuracy and protecting it against unauthorized access, use and modification through automated, context-rich classification and remediation to reduce financial, regulatory, and legal risks.”

Understand How Data is Being Used Through Context-Rich Data Classification

Data classification has always been a cornerstone of mature data governance programs. With growing threat surface risks arising from digital transformation, cloud migration and work from home initiatives, automated classification has become essential to an enterprise’s ability to understand their data, know where it is located, identify its contents — and ultimately make better decisions around it.

In addition to its current sensitivity classification (which sets confidentiality level), Spirion has added five new persistent classification categories out-of-the-box to give organizations more flexibility in how they can organize and precisely define their data to stay compliant, which include:

  • Process: provides context around business processes where the data is being used.
  • Purpose: identifies why a business is collecting information about an individual.
  • Preference: informs any customer preferences surrounding how the data can be used.
  • Regulatory: associates any regulations that would govern the data, such as CPRA, GDPR, PCI, or HIPAA.
  • Custom: allows organizations to easily create their own custom categories; for instance, classifying data by location of subjects, third-party processors, etc.
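One way to picture these persistent classification categories is as a structured label attached to each record, alongside the existing sensitivity level. The data class below is purely illustrative; its field names are assumptions for this sketch, not Spirion's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Classification:
    """Illustrative label set mirroring the categories above."""
    sensitivity: str                  # existing confidentiality level
    process: Optional[str] = None     # business process using the data
    purpose: Optional[str] = None     # why the data was collected
    preference: Optional[str] = None  # customer usage preferences
    regulatory: List[str] = field(default_factory=list)   # CPRA, GDPR, PCI, HIPAA...
    custom: Dict[str, str] = field(default_factory=dict)  # e.g. subject location

label = Classification(
    sensitivity="Restricted",
    process="claims-intake",
    purpose="identity-verification",
    regulatory=["CPRA", "GDPR"],
    custom={"subject_location": "EU"},
)
print(label.regulatory)  # ['CPRA', 'GDPR']
```

A remediation playbook of the kind described could then key automated actions (quarantine, encrypt, notify) off combinations of these fields.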

By auto-classifying documents based on content and context, organizations will gain additional insight about their data and how it is being used, so they can better understand its risk and enact appropriate controls to protect it. Spirion’s context-rich classification playbooks give organizations the ability to act on classifications to enforce controls through automated remediation.

Find Sensitive Information No Matter Where it Lives with AnyScan Connectors

Spirion has entered into a licensing agreement with CData, a leading provider of standards-based drivers for data integration, to expand the ability of Sensitive Data Platform to detect, classify and remediate sensitive data across more systems to ensure compliance with security, privacy, and regulatory mandates. The relationship will enable Spirion’s customers to scan for sensitive and restricted information in more than 200 disparate big data, SaaS, NoSQL, RDBMS, collaboration, ERP, accounting, and CRM data locations through plug-and-play AnyScan connectors designed to reduce integration time, cost and complexity.

Sensitive Data Platform’s initial AnyScan connectors provide connectivity for Salesforce, Snowflake, Jira, Confluence, Microsoft Planner, Apache Hadoop, Apache Hive, Hadoop Distributed File System (HDFS), Apache HBase, Apache Phoenix, and Apache Parquet. New connectors will be tested and released on a quarterly basis.
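Conceptually, the discovery these connectors feed boils down to pulling rows from each source and matching values against sensitive-data patterns. The sketch below is a deliberately simplified stand-in: two toy regexes and a per-row scan. A production engine like Spirion's uses far more robust detection (checksums, context, validation) than this.

```python
import re

# Simplified detectors, for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_row(row):
    """Return the sensitive-data types found in one record's values."""
    found = set()
    for value in row.values():
        for kind, pattern in PATTERNS.items():
            if pattern.search(str(value)):
                found.add(kind)
    return found

row = {"name": "Jane Doe", "note": "SSN on file: 123-45-6789"}
print(scan_row(row))  # {'ssn'}
```

The value of the connector layer is that the same `scan_row`-style logic can run unchanged over rows fetched from Snowflake, Salesforce, HDFS, or any of the other 200+ sources.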

Self-Host Spirion on Your Own Private Azure Cloud

Spirion’s Sensitive Data Platform, Sensitive Data Finder and Sensitive Data Watcher solutions can now be hosted on a customer’s private Azure cloud subscription. This approach gives customers all the benefits of cloud services (cost savings, security, less maintenance) while retaining control over their Spirion tenant and data. To self-host Spirion on a private Azure cloud, customers must have an active Azure tenant running on Linux OS. Setup takes less than 30 minutes.

For more such updates and perspectives around Digital Innovation, IoT, Data Infrastructure, AI & Cybersecurity, go to AI-Techpark.com.

The post Spirion announces enhancements to its Sensitive Data Platform first appeared on AI-Tech Park.

Infoworks.io announces Infoworks Replicator 4.0
https://ai-techpark.com/infoworks-io-announces-infoworks-replicator-4-0/
Fri, 08 Apr 2022 09:45:00 +0000
Automation renders hand coding and legacy point tools obsolete

Infoworks.io announces Infoworks Replicator 4.0, enabling migration of on-premises Hadoop data lakes to the cloud three times faster, with one-third the resources required by traditional approaches. Digital transformation is a critical imperative for enterprises, and migrating data and analytics to the cloud is an essential step. Infoworks Replicator has fundamentally changed the game.

Automation enables rapid migration with fewer specialized resources
Infoworks Replicator makes hand-coding and labor-intensive legacy point tools obsolete, enabling cloud data migration with fewer specialized resources at lower cost. By automating the process, data and metadata are migrated rapidly, and scarce, expensive data talent is freed to focus on higher-value business priorities.

Continuous synchronization ensures seamless migration
Replicator runs as a service, maintaining continuous operation and synchronization between on-premises Hadoop and cloud clusters to ensure consistency and continuity. Replicator enables migrations of petabytes of data without risk of data loss or business disruption; automated error handling and fault tolerance mitigate the impact of network and node failures for seamless migration.
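The change computation behind this continuous synchronization can be pictured as diffing two file inventories: what exists only on-premises must be copied, what differs must be re-copied, and what exists only in the cloud must be removed. The sketch below assumes simple path-to-checksum maps and is only an illustration of the idea, not Infoworks' implementation.

```python
def compute_changes(source, target):
    """Diff two {path: checksum} inventories (e.g. on-prem HDFS vs cloud).

    Returns the paths to copy, re-copy, and delete to bring the target
    in sync with the source. Checksums stand in for whatever fingerprint
    (length, mtime, block checksum) a real replicator would use.
    """
    to_copy = sorted(p for p in source if p not in target)
    to_update = sorted(p for p in source if p in target and source[p] != target[p])
    to_delete = sorted(p for p in target if p not in source)
    return to_copy, to_update, to_delete

on_prem = {"/data/a.parquet": "c1", "/data/b.parquet": "c2", "/data/c.parquet": "c3"}
cloud   = {"/data/a.parquet": "c1", "/data/b.parquet": "old", "/data/d.parquet": "c9"}
print(compute_changes(on_prem, cloud))
# (['/data/c.parquet'], ['/data/b.parquet'], ['/data/d.parquet'])
```

Running this comparison continuously, rather than once, is what lets a migration proceed while the source cluster keeps changing.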

“Our differentiated approach to high-speed computation of changes and differences between the on-premises Hadoop cluster and the cluster in the cloud changes the game. We rethought the approach to data migration to meet modern needs of scale and speed which were previously unachievable,” said Amar Arsikere, Infoworks Chief Product Officer, CTO and co-founder. “Automation is essential to the success of any large-scale data migration.” 

Designed for today’s modern cloud data platform
Infoworks solves for the hurdles businesses face in cloud data and workload migration. Designed for large-scale hybrid and multi-cloud environments, Infoworks Replicator is extensible to the full Infoworks Platform. Infoworks provides customers a comprehensive solution for establishing a modern automated data platform – enabling unprecedented agility, scale, and simplicity from initial cloud migration to subsequent enterprise-wide data operations and orchestration.

To learn more about how to accelerate your Hadoop-to-cloud migration, contact us at replicator@infoworks.io or visit https://www.infoworks.io/products/infoworks-replicator/

Keep informed of Infoworks developments here:
www.twitter.com/infoworksio 
www.linkedin.com/company/infoworks-io 

For more such updates and perspectives around Digital Innovation, IoT, Data Infrastructure, AI & Cybersecurity, go to AI-Techpark.com.

The post Infoworks.io announces Infoworks Replicator 4.0 first appeared on AI-Tech Park.

Hadoop Market to Reach $842.25 Bn, Globally, by 2030 at 37.4% CAGR
https://ai-techpark.com/hadoop-market-to-reach-842-25-bn-globally-by-2030-at-37-4-cagr/
Fri, 11 Mar 2022 14:45:00 +0000
Affordable and rapid data processing and rise in demand for data analytics with generation of large volumes of unstructured data drive the growth of the global Hadoop market.

Allied Market Research published a report, titled, “Hadoop Market by Component (Hardware, Software, and Services), Deployment Model (On-premise, Cloud, and Hybrid), Enterprise Size (Large Enterprises and SMEs), and Industry Vertical (Manufacturing, BFSI, Retail & Consumer Goods, IT & Telecommunication, Healthcare, Government & Defense, Media & Entertainment, Energy & Utility, Trade & Transportation, and Others): Global Opportunity Analysis and Industry Forecast, 2021-2030.” According to the report, the global Hadoop industry generated $35.74 billion in 2020, and is expected to reach $842.25 billion by 2030, witnessing a CAGR of 37.4% from 2021 to 2030.
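The headline growth rate can be sanity-checked from the two endpoints the report gives, using the standard formula CAGR = (end/start)^(1/years) − 1:

```python
start_bn, end_bn, years = 35.74, 842.25, 10  # $bn in 2020 -> $bn by 2030
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 37.2%, in line with the reported 37.4% (the small
                      # gap comes from rounding and period conventions)
```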

Drivers, Restraints, and Opportunities

Affordable and rapid data processing and rise in demand for data analytics with generation of large volumes of unstructured data drive the growth of the global Hadoop market. However, rise in security concerns regarding distributed computing, Hadoop architecture, and access to fragmented data restrain the market growth. On the other hand, adoption of partnership strategies by market players and investments in Hadoop technologies present new opportunities in the coming years.

Covid-19 Scenario

  • During the Covid-19 pandemic, the adoption of Hadoop increased considerably with digital transformation taking place across different industries.
  • The implementation of “work from home” culture led to rise in demand for cloud-based Hadoop analytics for managing crucial information. This led to surge in overall revenue of the market.
  • Moreover, there has been a significant surge in demand for Hadoop software among small & medium and large enterprises to analyze the large chunks of unstructured data.

The service segment to continue its lead position in terms of revenue throughout the forecast period

Based on component, the service segment accounted for the highest market share in 2020, accounting for more than two-fifths of the global Hadoop market, and is estimated to continue its lead position in terms of revenue throughout the forecast period. This is due to fast, easy, and cost-effective management of large-scale data and hiring of outsourcing services such as Noah Data and IBM to meet Hadoop requirements. However, the software segment is expected to witness the highest CAGR of 38.4% from 2021 to 2030, owing to rise in number of data sets and surge in use by developers for coding real-time applications.

The IT & telecommunication segment to maintain its leadership status during the forecast period

Based on industry vertical, the IT & telecommunication segment contributed the highest market share in 2020, holding nearly one-fifth of the global Hadoop market, and is projected to maintain its leadership status during the forecast period. This is attributed to adoption by large organizations for large-scale data analysis and processing, handling customer issues, and enhancing customer satisfaction through prompt response. However, the trade & transportation segment is expected to register the fastest CAGR of 42.6% from 2021 to 2030, owing to the crucial role of Hadoop monitoring systems in ensuring passenger safety by analyzing the large amounts of data generated by every part of a vehicle.

North America to maintain its dominance by 2030

Based on region, North America held the highest market share in terms of revenue in 2020, contributing more than two-fifths of the global Hadoop industry, and is expected to maintain its dominance through 2030. This is attributed to a surge in the volume of raw, structured, and unstructured data and a rise in demand for big data analytics. Furthermore, the need for business flexibility and agility fuels adoption of Hadoop in the region. However, Asia-Pacific is estimated to register the fastest CAGR of 39.3% during the forecast period, owing to the lower cost of Hadoop systems compared to traditional systems and their ability to store and process the large volumes of data critical for applications built for populous countries.

For Purchase Enquiry: https://www.alliedmarketresearch.com/purchase-enquiry/835

Leading Market Players

  • Amazon Web Services
  • Cisco Systems, Inc.
  • Cloudera, Inc.
  • Datameer, Inc.
  • Hitachi Data Systems
  • Fair Isaac Corporation
  • MapR Technologies
  • MarkLogic
  • Microsoft Corporation
  • Teradata Corporation

Access AVENUE- A Subscription-Based Library (Premium on-demand, subscription-based pricing model) at:

https://www.alliedmarketresearch.com/library-access

Avenue is a user-based library of global market report databases, providing comprehensive reports on the world’s largest emerging markets. It also offers e-access to all available industry reports in a jiffy. By offering core business insights on varied industries, economies, and end users worldwide, Avenue gives registered members a single, easy gateway to their all-inclusive research requirements.

For more such updates and perspectives around Digital Innovation, IoT, Data Infrastructure, AI & Cybersecurity, go to AI-Techpark.com.

The post Hadoop Market to Reach $842.25 Bn, Globally, by 2030 at 37.4% CAGR first appeared on AI-Tech Park.

Global Hadoop-as-a-Service Market to 2027
https://ai-techpark.com/global-hadoop-as-a-service-market-to-2027/
Tue, 22 Feb 2022 18:00:00 +0000
The “Hadoop-as-a-Service Market, By Deployment Type (Run it Yourself (RIY) and Pure Play (PP)), Organization Size (Large Enterprises, Small & Medium Enterprises) and End User – Global Forecast to 2027” report has been added to ResearchAndMarkets.com’s offering.

According to the report, the total market is expected to grow at a CAGR of 39.2% during the forecast period from 2021 to 2027.

This report covers a sub-market in this field, the Hadoop-as-a-Service Market by deployment type, in detail, segmenting the market into run it yourself (RIY) and pure play (PP). The scope of the report also covers Hadoop-as-a-Service by organization size, which includes large enterprises and small & medium enterprises.

It provides insights by end user, segmented into BFSI, healthcare & life sciences, retail and consumer goods, IT & telecommunication, education, manufacturing, media & entertainment, government & defense, and others. Lastly, the Hadoop-as-a-Service Market is segmented by geography across North America, Europe, Asia-Pacific (APAC) and RoW (Rest of the World), with each regional market further sub-segmented by country.

The report covers all the driving factors, opportunities, and challenges in the global Hadoop-as-a-Service Market, which are helpful in identifying trends and key success factors for the industry. It includes impact analysis of market dynamics, with the factors currently driving and restraining the growth of the market, along with their impact over the short, medium, and long term.

The report also includes qualitative analysis of the market, incorporating a complete analysis of the industry value chain, funding and investments, Porter’s analysis and PEST (Political, Economic, Social & Technological) analysis. The report profiles all major companies active in this field and provides the competitive landscape of the key players, covering all key growth strategies. Moreover, the report formulates the entire value chain of the market, along with industry trends, with emphasis on market timelines & technology roadmaps and market and product life cycle analysis.

Reasons to purchase this Report:

  1. Determine prospective investment areas based on a detailed trend analysis of the global Hadoop-as-a-Service Market over the coming years.
  2. Gain an in-depth understanding of the underlying factors driving demand for different Hadoop-as-a-Service Market segments in the top spending countries across the world, and identify the opportunities offered by each of them.
  3. Strengthen your understanding of the market in terms of demand drivers, industry trends, and the latest technological developments, among others.
  4. Identify the major channels that are driving the global Hadoop-as-a-Service Market, providing a clear picture of future opportunities that can be tapped, resulting in revenue expansion.
  5. Channelize resources by focusing on the ongoing programs that are being undertaken by the different countries within the global Hadoop-as-a-Service Market.
  6. Make correct business decisions based on a thorough analysis of the total competitive landscape of the sector with detailed profiles of the top Hadoop-as-a-Service Market providers around the world which include information about their products, alliances, recent contract wins and financial analysis wherever available.

Companies Mentioned

  • Microsoft Corporation
  • Amazon web services
  • IBM Corporation
  • Cloudera Inc.
  • MapR Technologies
  • Google Inc.
  • EMC Corporation
  • SAP SE
  • Datameer
  • Mortar Data (Datadog)

Major Classifications are as follows:

By Deployment Type

  • Run it Yourself (RIY)
  • Pure Play (PP)

By Organization Size

  • Large Enterprises
  • Small & Medium Enterprises

By End User

  • BFSI
  • Healthcare & Life Sciences
  • Retail and Consumer Goods
  • IT & Telecommunication
  • Education
  • Manufacturing
  • Media & Entertainment
  • Government & Defense
  • Others

By Region

  • North America
  • US
  • Canada
  • Europe
  • UK
  • Germany
  • France
  • Rest of Europe
  • Asia-Pacific (APAC)
  • China
  • Japan
  • India
  • Rest of APAC
  • Rest of the World (RoW)
  • Middle East
  • Africa
  • South America

Visit AITechPark for cutting-edge Tech Trends around AI, ML, Cybersecurity, along with AITech News, and timely updates from industry professionals!

The post Global Hadoop-as-a-Service Market to 2027 first appeared on AI-Tech Park.

Next Generation of Oracle Autonomous Data Warehouse Available
https://ai-techpark.com/next-generation-of-oracle-autonomous-data-warehouse-available/
Thu, 18 Mar 2021 10:15:00 +0000
Provides easy-to-use, no-code tools that empower Data Analysts to do tasks that previously required data engineers and data scientists

Puts faster, more powerful insights within the reach of organizations of all sizes

Today Oracle announced a set of innovative enhancements to Oracle Autonomous Data Warehouse, the industry’s first and only self-driving cloud data warehouse. With this latest release, Oracle goes beyond other cloud offerings by completely transforming cloud data warehousing from a complex ecosystem of products, tools, and tasks that requires extensive technical expertise, time and money to perform data loading, data transformation and cleansing, business modeling, and machine learning into an intuitive point-and-click, drag-and-drop experience for data analysts, citizen data scientists, and business users.  As a result, Oracle Autonomous Data Warehouse empowers organizations of all sizes—from the smallest to the largest—to get significantly more value from their data, achieve faster results, accelerate insights, and improve productivity while lowering costs with zero administration.

The latest enhancements to Oracle Autonomous Data Warehouse provide a single data platform built for businesses to ingest, transform, store, and govern all data to run diverse analytical workloads from any source, including departmental systems, enterprise data warehouses and data lakes.

“Oracle Autonomous Data Warehouse is the only fully self-driving cloud data warehouse today,” said Andrew Mendelsohn, executive vice president, database server technologies, Oracle.  “With this next generation of Autonomous Data Warehouse, we provide a set of easy-to-use, no-code tools that uniquely empower business analysts to be citizen data scientists, data engineers, and developers.”

Citizen data scientists and analysts will also benefit from powerful new self-service graph modeling and graph analytics. To empower developers to build data-driven applications, Oracle offers Oracle APEX (Application Express) Application Development, a low-code application development tool built directly into its cloud data warehouse, as well as RESTful services, which makes it easy for any modern application to interact with warehouse data. Unlike other vendors’ single-purpose, isolated databases in the cloud, Oracle Autonomous Data Warehouse provides support for multi-model, multi-workload, and multi-tenant requirements—all within a single, modern converged database engine—including JSON document, operational, analytic, graph, ML, and blockchain databases and services.

New Innovations in Oracle Autonomous Data Warehouse
The latest release includes many new innovations, not only a broad set of capabilities that make it easier for analysts, citizen data scientists, and line-of-business developers to take advantage of the industry’s first and only self-driving cloud data warehouse, but also features that deliver deeper analytics and tighter data lake integration. Key capabilities include:

  • Built-in Data Tools: Business analysts now have a simple, self-service environment for loading data and making it available to their extended team for collaboration. They can load and transform data from their laptop or the cloud by simply dragging and dropping. They can then automatically generate business models; quickly discover anomalies, outliers and hidden patterns in their data; and understand data dependencies and the impact of changes.
  • Oracle Machine Learning AutoML UI: By automating time-intensive steps in the creation of machine learning models, the AutoML UI provides a no-code user interface for automated machine learning to increase data scientist productivity, improve model quality and enable even non-experts to leverage machine learning.
  • Oracle Machine Learning for Python: Data scientists and other Python users can now use Python to apply machine learning on their data warehouse data, fully leveraging the high-performance, parallel capabilities and 30+ native machine learning algorithms of Oracle Autonomous Data Warehouse.
  • Oracle Machine Learning Services: DevOps and data science teams can deploy and manage native in-database models and ONNX-format classification and regression models outside Oracle Autonomous Data Warehouse, and can also invoke cognitive text analytics. Application developers have easy-to-integrate REST endpoints for all functionality.
  • Property Graph Support: Graphs help to model and analyze relationships between entities (for example, a social network graph). Users can now create graphs within their data warehouse, query graphs using PGQL (property graph query language) and analyze graphs with over 60 in-memory graph analytics algorithms.
  • Graph Studio UI: Graph Studio builds on property graph capabilities of Oracle Autonomous Data Warehouse to make graph analytics easier for beginners. It includes automated creation of graph models, notebooks, integrated visualization and pre-built workflows for different use cases.
  • Seamless Access to Data Lakes: Oracle Autonomous Data Warehouse extends its ability to query data in Oracle Cloud Infrastructure (OCI) Object Storage and all popular cloud object stores with three new data lake capabilities: easy querying of data in Oracle Big Data Service (Hadoop); integration with OCI Data Catalog to simplify and automate data discovery in object storage; and scale-out processing to accelerate queries of large data sets in object storage.
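
To make the property graph idea above concrete, here is a minimal, library-free Python sketch of vertices with properties, labeled edges, a PGQL-like pattern match, and one basic analytics measure (degree centrality). It is only a conceptual illustration; it does not use Oracle’s graph engine, and every name in it is invented for the example.

```python
from collections import defaultdict

# Tiny in-memory property graph; all names here are invented for the example.
vertices = {
    "alice": {"type": "person", "city": "Paris"},
    "bob":   {"type": "person", "city": "Lyon"},
    "carol": {"type": "person", "city": "Paris"},
    "acme":  {"type": "company"},
}
edges = [  # (source, edge label, destination)
    ("alice", "knows",    "bob"),
    ("bob",   "knows",    "carol"),
    ("alice", "works_at", "acme"),
    ("carol", "works_at", "acme"),
]

def match_pattern(edges, label):
    """Rough analogue of a PGQL pattern such as MATCH (a)-[:knows]->(b)."""
    return [(src, dst) for src, lbl, dst in edges if lbl == label]

def degree_centrality(edges):
    """One of the simplest graph analytics measures: edges touching each vertex."""
    degree = defaultdict(int)
    for src, _lbl, dst in edges:
        degree[src] += 1
        degree[dst] += 1
    return dict(degree)

print(match_pattern(edges, "knows"))   # [('alice', 'bob'), ('bob', 'carol')]
print(degree_centrality(edges))
```

In a real deployment the query would be expressed in PGQL and run against the graph stored in the warehouse; the sketch only shows the shape of the data model and of a degree-style analysis.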
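
The “scale-out processing” mentioned for data lake queries generally refers to scatter-gather aggregation: each worker aggregates its own data partition, and the partial results are then merged. The following is a minimal sketch of that general pattern in plain Python, assuming toy in-memory partitions in place of real object-store files; it is not Oracle’s implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Pretend each "partition" is one object-store file holding (key, amount) rows.
partitions = [
    [("books", 10.0), ("games", 5.0)],
    [("books", 7.5), ("music", 3.0)],
    [("games", 2.5), ("music", 1.0)],
]

def partial_sum(rows):
    """Aggregate one partition locally: the scatter (map) side."""
    acc = {}
    for key, amount in rows:
        acc[key] = acc.get(key, 0.0) + amount
    return acc

def scatter_gather(parts):
    """Run partial aggregates on parallel workers, then merge: the gather side."""
    with ThreadPoolExecutor(max_workers=len(parts)) as pool:
        partials = list(pool.map(partial_sum, parts))
    total = {}
    for part in partials:
        for key, amount in part.items():
            total[key] = total.get(key, 0.0) + amount
    return total

print(scatter_gather(partitions))  # each key summed across all partitions
```

Because each partition is aggregated independently, adding workers speeds up large object-storage queries roughly in proportion to the number of partitions.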

What Customers Are Saying
“By using Oracle Analytics Cloud and Autonomous Data Warehouse, we’re able to apply machine learning and spatial analysis to better track check cashing behavior that mitigates risk and prevents fraud in real-time to help businesses and consumers more confidently engage in commerce,” said Eric Probst, Senior Manager, Fraud Analytics, Certegy.

“With Oracle Autonomous Data Warehouse and APEX, I not only have a world-class, scalable, super-secure, super-powerful database engine, but with the built-in application development tools, I can also build and deploy applications almost right away so that I can get people access to data,” said Frank Hoogendoorn, Chief Data Officer, MineSense. “I don’t know of any other platform where I can do that out of the box.”

“Having innovative capabilities for loading data that’s built right into Oracle Autonomous Data Warehouse should save us a tremendous amount of time,” said Derek Hayden, SVP of Data Strategy and Analytics, OUTFRONT Media. “The declarative extract, load, and transform with its drag-and-drop functionality will enable us to quickly load and transform multiple data types, and see the relationships within the data through the auto-insights capability.”

“Oracle Autonomous Data Warehouse has reduced time-to-market for a typical data warehouse project from three months to three days, while delivering deeper and more actionable insights,” said Steven Chang, CIO, Kingold. “Being able to benefit from increased automation for data ingestion, transformation, building business models and getting insights is excellent news, and we’re looking forward to using those capabilities.”

What Analysts Are Saying
“Our research, based on interviews with several customers around the globe, shows that those Oracle Autonomous Data Warehouse customers have achieved approximately 63 percent reduced total cost of operations, while increasing the productivity of data analytics teams by 27 percent, with breakeven on their investment having occurred in an average of five months,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “This ROI included significant productivity gains across data, analytics, and developer teams. While individual customer results may vary, the benefits found in this study are indicative of the kind of improvements that most may expect. With these new intuitive integrated tools incorporated in Oracle Autonomous Data Warehouse, it is reasonable to expect that productivity gains will further increase, enabling businesses to achieve an even better ROI.”

“Oracle Autonomous Database in all its flavors continues without a response from competitors even after three years in the market,” said Holger Mueller, Vice President and Principal Analyst, Constellation Research. “Now Oracle is adding to that lead with enhancements to Oracle Autonomous Data Warehouse that aim to democratize all aspects of analytics and machine learning by eliminating the need for users to know SQL. Instead, Oracle provides drag-and-drop UIs and AutoML for building and testing machine learning models, so that business users can do their own data explorations without depending on IT, DBAs, or system administrators to manage the data. All of this is built on Oracle’s converged database foundation which gives users access to all data models and types within a single database.”  

“The objective of IT automation is to remove IT from the day-to-day workflows and allow the lines of business to work directly to define and mine the data that matters,” said David Floyer, CTO & Co-founder of Wikibon. “The Oracle Autonomous Data Warehouse now allows end-users to use drag-and-drop and low-code technologies to define the data requirements for a wide variety of end-user tools such as Tableau and Qlik. Oracle Autonomous Data Warehouse has improved spatial, graph, and ML analytics available on-premises or in public clouds with improved real-time performance. Oracle is cool again.”

“Oracle continues to make life dramatically easier for anyone associated with data and its value,” said Mark Peters, Principal Analyst & Practice Director, Enterprise Strategy Group. “Having started by helping DBAs and system administrators with its self-driving Autonomous Database, Oracle is now broadly extending the productivity and efficiency benefits of its Autonomous Data Warehouse so that everyone from data analysts, citizen data scientists, and business users can leverage it in easy and familiar ways. The drag-and-drop UIs and low-code interfaces simplify everything from data loading and analysis to building machine learning models. While Oracle’s competition—which often still requires extensive expertise, third-party tools or retrieving data manually from external databases—has work to do to better address the needs of non-technical personas, Oracle is there now.”  

“Enabling data analysts, citizen data scientists, and business users to create and analyze their own data sets with self-service tools avoids IT bottlenecks and significantly improves their productivity. This is exactly what Oracle has done with its enhancements to Autonomous Data Warehouse,” said Bradley Shimmin, Chief Analyst, Omdia. “Oracle is equipping integrated tools with intuitive drag-and-drop interfaces that make it easier for data analysts to load, transform, and clean data; further, they can leverage machine learning to automatically create business models and discover patterns, thereby generating insights that lead to better and faster business decisions.”

“Just as some data warehouse clouds are trying to figure out how they play well with machine learning, Oracle has moved the goal posts by a lot,” said Marc Staimer, President of DS Consulting and Wikibon analyst. “Oracle’s Autonomous Data Warehouse now includes Auto-ML. Oracle Autonomous Data Warehouse has included built-in machine learning since its inception. But now they’ve automated it so any Autonomous Data Warehouse customer can use it without any expertise. This makes other offerings seem rudimentary and primitive by comparison.”

“Oracle’s enhancements to Autonomous Data Warehouse are significant in three ways. First, it provides point-and-click user interfaces and machine learning automation, enabling non-professionals to generate actionable insights. Second, with this ease-of-use, even SMBs with small IT departments can get benefits from Oracle’s sophisticated cloud data warehouse. And, third, with Autonomous Data Warehouse, users can ingest data from any source from departmental systems to enterprise data warehouses, data lakes, and even from other clouds—AWS, Azure, and Google — and run diverse analytical workloads,” said Richard Winter, CEO and Principal Architect.  “All in all, Oracle is materially extending the reach of Autonomous Data Warehouse across users, organizations, and data access to multi-clouds. This transcends the barriers of what is possible today with AWS Redshift and Snowflake and any other cloud data warehouse on the planet.”

“KuppingerCole has recognized Oracle’s continued innovation in database technologies, naming Oracle Autonomous Database the Overall Leader in our Leadership Compass on Enterprise Databases in the Cloud last year,” said Alexei Balaganski, Lead Analyst, KuppingerCole Analysts. “Clearly, the company did not stop there. With the unveiling of the improved Autonomous Data Warehouse, Oracle continues to deliver on its vision to democratize data management, analytics, and security for organizations of any size or industry. These new features and enhancements allow every user to access any data and obtain insights close to real-time with intelligent self-service tools. The company’s ‘converged database’ approach ensures that all types of data are accessible at once, as opposed to the siloed nature of traditional analytics platforms. This helps businesses to avoid the exposure of sensitive information to unnecessary security and compliance risks.”

The post Next Generation of Oracle Autonomous Data Warehouse Available first appeared on AI-Tech Park.

simMachines’ New Platform Offers Dashboard to Location-based Brands https://ai-techpark.com/simmachines-new-platform-offers-dashboard-to-location-based-brands/ Thu, 29 Oct 2020 18:00:00 +0000
simMachines, Inc. (http://www.simMachines.com), the leader in Explainable AI / Machine Learning (XAI) applications, announced today the general availability of personifyAI, a subscription solution for location-based brands to remove the expensive and laborious effort needed to harness the benefits of location-based data. personifyAI leverages sophisticated AI and machine learning to quickly and easily develop meaningful audience segments from location-based data and monitor their performance over time. Armed with these insights, marketers can identify, understand, and activate the customer segments that are most likely to visit the brand’s locations.

“Brands with physical locations already know the value of location-based data for understanding customer traffic patterns so they can better understand and engage their customers,” said Robert Zieserl, CEO of simMachines. “With personifyAI, marketers now have a complete, AI-generated, custom segmentation solution to easily apply location-based data insights across the marketing campaign lifecycle. This is all brought together in a beautiful, intuitive dashboard so that marketers can spend more time harnessing the benefits of location-based data rather than being dragged down by the analysis.”

personifyAI by simMachines brings a new approach to leveraging location-based data that avoids the two key problems marketers face today: customer data privacy risks and the complex, expensive, and cumbersome processing of the raw dataset before practitioners can use it. By delivering location-based data already processed and split into meaningful, objective-driven segments, personifyAI unlocks its true potential for powerful marketing use cases.

“Before today, marketers had two choices: building or buying their location-based data program,” said Mark Ailsworth, Chief Growth Officer of simMachines. “Building your own location data practice offers the flexibility for multiple use cases, but the downside is that your team does all of the heavy lifting. Buying location data already processed from vendors might be turnkey, but you only get a small slice of the potential this dataset holds for your business. personifyAI is the new option that gives marketers the best of both worlds: the turnkey ease of buying with the flexibility to process the data for any use case you can imagine.”

AI-driven segmentation has become the industry standard, but with simMachines’ proprietary Explainable AI (XAI), personifyAI breaks open the black box normally associated with machine learning. Location-based marketers can actually see what data features drove the segments themselves. Transparency always yields better insights, and better insights drive better media planning, creative design, and overall campaign effectiveness. For example, a national casual dining chain wanted to better understand how to drive their lunchtime business, which they suspected was a very different crowd demographically than their evening and nighttime guests. personifyAI showed them that one of the most reliable and high-frequency lunch-going segments was construction workers working job sites within a half-mile radius of their store locations. That’s why Explainable AI matters.
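
As a rough illustration of how segment-level explanations can emerge from clustering, the sketch below runs a generic k-means over invented visit data and then reports each segment’s average feature values. This is plain Python for illustration only, not simMachines’ proprietary XAI; their method additionally explains which features drove each segment’s formation.

```python
# Visits described by two numeric features: hour of visit and dwell time (min).
visits = [(12, 25), (13, 30), (12, 20),   # lunchtime, short stays
          (19, 70), (20, 80), (21, 75)]   # evening, long stays

def kmeans(points, k=2, iters=10):
    """Plain k-means; deterministic here because the first k points seed it."""
    centroids = [tuple(map(float, p)) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        new_centroids = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centroids.append(tuple(sum(dim) / len(cl) for dim in zip(*cl)))
            else:
                new_centroids.append(centroids[i])  # keep an empty cluster's seed
        centroids = new_centroids
    return centroids, clusters

centroids, clusters = kmeans(visits)
for c, cl in zip(centroids, clusters):
    # The centroid values act as a simple "explanation" of each segment.
    print(f"segment of {len(cl)} visits: avg hour={c[0]:.1f}, avg dwell={c[1]:.1f} min")
```

On this toy data the two recovered segments are a short-stay lunchtime crowd and a long-stay evening crowd, mirroring the casual-dining example above.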

Segmentation is also a powerful tool for using location data while protecting customers’ data privacy. simMachines ensures that individual customer identities can never be revealed by pooling visitors with similar key attributes.
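
This pooling approach is similar in spirit to k-anonymity: any attribute pool with fewer than k members is suppressed so it cannot single out an individual. A minimal sketch of that idea, with invented attributes and a hypothetical threshold (not simMachines’ actual mechanism):

```python
from collections import Counter

# Each visit is reduced to coarse attributes; no identities are ever stored.
visits = [("lunch", "downtown"), ("lunch", "downtown"), ("lunch", "downtown"),
          ("evening", "suburb"), ("evening", "suburb"),
          ("night", "airport")]  # a pool of one could identify a person

def pooled_segments(visits, k=2):
    """Keep only attribute pools with at least k members (k-anonymity-style)."""
    counts = Counter(visits)
    return {attrs: n for attrs, n in counts.items() if n >= k}

print(pooled_segments(visits))
# the ("night", "airport") pool is suppressed because it is below the threshold
```

Raising k trades segment granularity for stronger privacy, since every reported pool is guaranteed to blend at least k visitors.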

Using personifyAI, marketers don’t need to be dragged down with the processing of raw location-based data, but rather can immediately:

  • Gain a better understanding of their location visitors via meaningful audience segments
  • Build competitive intel on who is visiting their locations and what other businesses they visit before/after visiting their locations
  • Plan media based on location-based insights
  • Target ads via location-based audience segments via personifyAI’s integrations with popular media buying platforms
  • Measure what program elements are and aren’t working to drive customer foot-traffic and sales to each unique location
  • Optimize media based on location-based measurement metrics on each segment
  • Use it for strategic market research, such as where to build the next store

“The true value of location-based data is how it can be combined with other datasets, such as a brand’s customer or loyalty data,” said Dave Jakopac, Chief Customer Officer of simMachines. “personifyAI is a turnkey, complete solution for marketers to layer location-based data onto their other proprietary or third-party data to create a unique, strategic asset that can be used across the marketing campaign lifecycle.”

The post simMachines’ New Platform Offers Dashboard to Location-based Brands first appeared on AI-Tech Park.
