Enterprise Big Data PREDICTIONS 2016

Companies big and small are finding new ways to capture and use more data. The push to make big data more mainstream will get stronger in 2016. Here are Oracle's top 10 predictions:
(1) Data Civilians Operate More and More Like Data Scientists.
While complex statistics may still be limited to data scientists, data-driven decision-making shouldn’t be. In the coming year, simpler big data
discovery tools will let business analysts shop for data sets in enterprise Hadoop clusters, reshape them into new mashup combinations,
and even analyze them with exploratory machine learning techniques. Extending this kind of exploration to a broader audience will both
improve self-service access to big data and provide richer hypotheses and experiments that drive the next level of innovation. (see video: Self-service and Data discovery in the age of Big Data Analytics)


(2) Experimental Data Labs Take Off.
With more hypotheses to investigate, professional data scientists will
see increasing demand for their skills from established companies.
For example, banks, insurers, and credit-rating firms will turn to
algorithms to price risk and guard against fraud more effectively.
But many such decisions are hard to migrate from clever judgments
to clear rules. Expect a proliferation of experiments in default risk,
policy underwriting, and fraud detection as firms try to identify
hot spots for algorithmic advantage faster than the competition. (see video: Capitalize on Big Data in the Financial Services Industry)


(3) DIY Gives Way to Solutions.
Early big data adopters had no choice but to build their own big data
clusters and environments. But building, managing and maintaining
these unique systems built on Hadoop, Spark, and other emerging
technologies is costly and time-consuming. In fact, average build time
is six months. Who can wait that long? In 2016, we’ll see technologies
mature and become more mainstream thanks to cloud services and
appliances with pre-configured automation and standardization.

(Whitepaper: The Surprising Economics of Engineered Systems for Big Data)


(4) Data Virtualization Becomes a Reality.
Companies not only capture a greater variety of data, they use
it in a greater variety of algorithms, analytics, and apps. But developers
and analysts shouldn’t have to know which data is where or get stuck
with just the access methodologies that repository supports. Look for
a shifting focus from using any single technology, such as NoSQL,
Hadoop, relational, spatial or graph, to increasing reliance on data
virtualization. Users and applications connect to virtualized data,
via SQL, REST and scripting languages. Successful data virtualization
technology will offer performance equal to that of native methods,
complete backward compatibility and security. (see video: One Fast Query on all of your Data)
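One way to picture a data virtualization layer is as a façade that maps logical dataset names onto heterogeneous physical stores, so callers never touch store-specific access methods. The sketch below is a toy illustration of that idea, not any vendor's API; every class and function name here is hypothetical.

```python
# Toy data virtualization façade: one query interface routing to
# heterogeneous backing stores. All names are hypothetical.

class VirtualCatalog:
    """Maps logical dataset names to physical stores, hiding access details."""

    def __init__(self):
        self._sources = {}

    def register(self, name, fetch_fn):
        # fetch_fn encapsulates the store-specific access method
        # (JDBC, REST, key-value get, graph traversal, ...).
        self._sources[name] = fetch_fn

    def query(self, name, predicate=lambda row: True):
        # Callers filter rows without knowing where the data lives.
        return [row for row in self._sources[name]() if predicate(row)]


# Two "stores" with different native shapes, unified behind the catalog.
relational_rows = [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
nosql_docs = [{"id": 3, "region": "EU", "tags": ["iot"]}]

catalog = VirtualCatalog()
catalog.register("customers", lambda: relational_rows)
catalog.register("devices", lambda: nosql_docs)

eu_customers = catalog.query("customers", lambda r: r["region"] == "EU")
print(eu_customers)  # [{'id': 1, 'region': 'EU'}]
```

A production virtualization layer would add query pushdown and caching so performance matches native access, which is exactly the bar the prediction sets.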


(5) Dataflow Programming Opens The Floodgates.
Initial waves of big data adoption focused on hand-coded data
processing. New management tools will decouple and insulate the
big data foundation technologies from higher level data processing
needs. We’ll also see the emergence of dataflow programming, which
takes advantage of extreme parallelism, provides simpler reusability
of functional operators, and gives pluggable support for statistical
and machine learning functions.

( see video: Data Integration to Convert Big Data into Tangible Decisions )
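The dataflow idea described above, small reusable operators composed into pipelines with pluggable functions, can be sketched in a few lines. This is an illustrative toy, not a real dataflow engine; the operator names are invented.

```python
# Toy dataflow-style pipeline: reusable functional operators composed
# left to right, with pluggable transforms. Names are hypothetical.
from functools import reduce

def source(data):
    return list(data)

def op_map(fn):
    return lambda stream: [fn(x) for x in stream]

def op_filter(pred):
    return lambda stream: [x for x in stream if pred(x)]

def pipeline(*ops):
    # Compose operators into one callable; each op stays independently reusable.
    return lambda stream: reduce(lambda s, op: op(s), ops, stream)

clean = pipeline(
    op_filter(lambda x: x is not None),   # drop missing readings
    op_map(lambda x: x * 1.8 + 32),       # pluggable transform: Celsius -> Fahrenheit
)
print(clean(source([0, None, 100])))  # [32.0, 212.0]
```

Because each operator only sees a stream, a real engine can parallelize them across partitions, which is where the "extreme parallelism" benefit comes from.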


(6) Big Data Gives Artificial Intelligence Something to Think About.
2016 will be the year where Artificial Intelligence (AI) technologies
such as Machine Learning (ML), Natural Language Processing (NLP)
and Property Graphs (PG) are applied to ordinary data processing
challenges. While ML, NLP and PG have already been accessible
as API libraries in big data, the new shift will include widespread
applications of these technologies in IT tools that support
applications, real-time analytics and data science.

( whitepaper: Thriving in the age of Big Data Analytics and Self-service )
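Of the three technologies, property graphs are probably the least familiar: vertices and edges carry key/value properties and are queried by traversal rather than by joins. A toy sketch of the idea follows, with made-up names (this is not a real property-graph API); the shared-device pattern at the end is one common fraud signal in graph analytics.

```python
# Minimal property-graph sketch: vertices and edges carry properties,
# queried by traversal. Hypothetical API, for illustration only.
class PropertyGraph:
    def __init__(self):
        self.vertices = {}          # id -> property dict
        self.edges = []             # (src, label, dst) triples

    def add_vertex(self, vid, **props):
        self.vertices[vid] = props

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def out(self, vid, label):
        # Follow edges with a given label outward from one vertex.
        return [d for s, l, d in self.edges if s == vid and l == label]

g = PropertyGraph()
g.add_vertex("acct1", kind="account")
g.add_vertex("acct2", kind="account")
g.add_vertex("device9", kind="device")
g.add_edge("acct1", "uses", "device9")
g.add_edge("acct2", "uses", "device9")

# Shared-device traversal: accounts that use the same device.
sharers = [v for v in g.vertices if "device9" in g.out(v, "uses")]
print(sharers)  # ['acct1', 'acct2']
```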


(7) Data Swamps Try Provenance to Clear Things Up.
Data lineage used to be a nice-to-have capability because so much
of the data feeding corporate dashboards came from trusted data
warehouses. But in the big data era, data lineage is a must-have
because customers are mashing up company data with third-party
data sets. Some of these new combinations will incorporate
high-quality, vendor-verified data. But others will use data that’s not
officially perfect, but good enough for prototyping. When surprisingly
valuable findings come from these opportunistic explorations,
managers will look to the lineage to know how much work
is required to raise the data to production-quality levels.

( whitepaper: Defining and Implementing a Pragmatic Data Governance Process )
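The core of lineage capture is simple: every derived dataset records its parents and the transform that produced it, so provenance can be walked back to the original sources. A minimal illustrative sketch, with all names hypothetical:

```python
# Toy lineage tracking: each derived dataset remembers its parents and
# the transform applied, so provenance can be walked back.
class Dataset:
    def __init__(self, name, rows, parents=(), transform=None):
        self.name, self.rows = name, rows
        self.parents, self.transform = list(parents), transform

    def derive(self, name, transform_name, fn):
        # Produce a child dataset while recording how it was made.
        return Dataset(name, fn(self.rows), parents=[self], transform=transform_name)

def lineage(ds):
    # Walk ancestry back to the original sources.
    chain = [(ds.name, ds.transform)]
    for parent in ds.parents:
        chain.extend(lineage(parent))
    return chain

raw = Dataset("vendor_feed", [{"amt": "10"}, {"amt": "x"}])
typed = raw.derive("typed", "drop_bad_amounts",
                   lambda rows: [r for r in rows if r["amt"].isdigit()])
print(lineage(typed))
# [('typed', 'drop_bad_amounts'), ('vendor_feed', None)]
```

With this chain in hand, a manager can see at a glance which "good enough" third-party feeds a promising prototype depends on.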


(8) IoT + Cloud = Big Data Killer App.
Big data cloud services are the behind-the-scenes magic of the
Internet of Things (IoT). Expanding cloud services will not only
capture sensor data but also feed it into big data analytics and
algorithms to make use of it. Highly secure IoT cloud services
will also help manufacturers create new products that safely
take action on the analyzed data without human intervention.

( Infograph: Energize your business with IoT Enabled Applications )
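The ingest-analyze-act loop described above can be sketched in miniature: readings stream in, a rolling aggregate feeds a rule, and the rule triggers an automated response with no human in the loop. The class name, window size, and threshold below are invented for illustration.

```python
# Toy IoT analytics loop: rolling average over a sensor stream drives
# an automated action. All names and thresholds are hypothetical.
from collections import deque

class SensorMonitor:
    def __init__(self, window=3, limit=80.0):
        self.window = deque(maxlen=window)   # last N readings
        self.limit = limit
        self.actions = []

    def ingest(self, reading):
        self.window.append(reading)
        avg = sum(self.window) / len(self.window)
        if avg > self.limit:
            # Automated response: no human intervention required.
            self.actions.append(f"throttle (avg={avg:.1f})")

monitor = SensorMonitor()
for temp in [70.0, 75.0, 90.0, 95.0]:
    monitor.ingest(temp)
print(monitor.actions)  # ['throttle (avg=86.7)']
```

In a cloud service this loop would run against device streams at scale, with the "action" pushed securely back to the product itself.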


(9) Data Politics Drive Hybrid Cloud.
Knowing where data comes from—not just what sensor or system,
but from within which nation’s border—will make it easier for
governments to enforce national data policies. Multinational
corporations moving to the cloud will be caught between
competing interests. Increasingly, global companies will move
to hybrid cloud deployments with machines in regional data centers
that act like a local wisp of a larger cloud service, honoring both
the drive for cost reduction and regulatory compliance.

( Blog: Big Data and the Future of Privacy )


(10) New Security Classification Systems Balance Protection with Access.
Increasing consumer awareness of the ways data can be collected,
shared, stored—and stolen—will amplify calls for regulatory
protections of personal information. Expect to see politicians,
academics and columnists grappling with boundaries and ethics.
Companies will increase use of classification systems that categorize
documents and data into groups with pre-defined policies for
access, redaction and masking. The continuous threat of ever
more sophisticated hackers will prompt companies both to
tighten security and to audit access and use of data. (whitepaper: Securing the Big Data Life Cycle)
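Such a classification system can be pictured as a policy table: each field carries a sensitivity class, and a pre-defined policy decides whether a given role sees the value, a masked version, or nothing at all. The classes, roles, and masking rule below are invented for illustration, not drawn from any real product.

```python
# Toy classification-driven access control: sensitivity classes map to
# per-role actions (show / mask / redact). All names are hypothetical.
POLICY = {
    "public":       {"analyst": "show",   "auditor": "show"},
    "confidential": {"analyst": "mask",   "auditor": "show"},
    "restricted":   {"analyst": "redact", "auditor": "mask"},
}

SCHEMA = {"name": "public", "email": "confidential", "ssn": "restricted"}

def apply_policy(record, role):
    out = {}
    for field, value in record.items():
        action = POLICY[SCHEMA[field]][role]
        if action == "show":
            out[field] = value
        elif action == "mask":
            out[field] = value[:2] + "***"   # crude partial masking
        # "redact": the field is omitted entirely
    return out

rec = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
print(apply_policy(rec, "analyst"))
# {'name': 'Ada', 'email': 'ad***'}
```

The same record yields different views per role, which is the balance the prediction describes: broad access to what is safe, tight control over what is not.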
The original document exists at ( click here )
