3 Issues with Data Management Tools

The market is currently awash with BI tools that advertise lofty claims about their ability to leverage data and ensure ROI. It is evident, however, that these systems are not created equal, and implementing the wrong one could adversely affect an organization.


While the consistent multifold growth of the digital universe is driving down the cost of data storage, a decline reported to be as much as 15-20 percent in the last few years alone, it is also the catalyst for the rising cost of data management. The cause for concern regarding data storage does not lie in the storage technologies themselves, but in the increasing complexity of managing data. The demand for people with adequate data management skills is not being sufficiently met, forcing organizations to train personnel from within. Equipping an organization with the skills and knowledge to properly wield these new data management tools demands a considerable portion of a firm’s time and money.


The increased capacity of a new data management system could be hindered by the existing environment if the process of integration is not handled with the proper care and supervision. With the introduction of a different system into a company’s current technological environment as well as external data pools (i.e. digital, social, mobile, devices, etc.), the issue of synergy between the old and new remains. CIO identifies this as a common oversight and advises organizations to remain cognizant of how data is going to be integrated from different sources and distributed across different platforms, as well as to closely observe how any new data management system operates with existing applications and other BI reporting tools, in order to maximize the insight extracted from the data.

Evan Levy, VP of Data Management Programs at SAS, shares his thoughts on the ideal components of an efficient data management strategy as well as the critical role of integration within this process, asserting that:

“If you look at the single biggest obstacle in data integration, it’s dealing with all of the complexity of merging data from different systems… The only reasonable solution is the use of advanced algorithms that are specially designed to support the processing and matching of specific subject area details. That’s the secret sauce of MDM (Master Data Management).”
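Levy’s “secret sauce” of matching subject-area details across systems can be illustrated with a minimal sketch. The fields, weights, and threshold below are hypothetical inventions for illustration; production MDM engines use far more sophisticated matching, but the underlying idea of scoring weighted field similarities is the same:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized edit-distance similarity between two strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_records(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Decide whether two customer records from different systems refer to
    the same real-world entity, by comparing a weighted set of fields."""
    weights = {"name": 0.5, "email": 0.3, "city": 0.2}  # illustrative weights
    score = sum(w * similarity(str(rec_a.get(f, "")), str(rec_b.get(f, "")))
                for f, w in weights.items())
    return score >= threshold

# The same customer, entered differently in two systems
crm = {"name": "Jonathan Smith", "email": "jon.smith@example.com", "city": "New York"}
erp = {"name": "Jon Smith",      "email": "jon.smith@example.com", "city": "New York"}

print(match_records(crm, erp))  # → True
```

Even this toy version shows why the problem is hard: the weights and threshold must be tuned per subject area, which is exactly the complexity Levy describes.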

Reporting Focus

The massive, seemingly unwieldy volume of data is one major concern amidst this rapid expansion; the other source of worry is that most of it is unstructured. Many data management tools offer to relieve companies of this issue by scrubbing the data clean and meticulously categorizing it. The tedious and expensive process of normalizing, structuring, and categorizing data does carry some informational benefit and can make reporting on the mass of data much more manageable. In the end, however, a lengthy, well-organized report does not guarantee usable business insight. According to research conducted by Gartner, 64% of business and technology decision-makers have difficulty getting answers simply from their dashboard metrics. Many data management systems operate mostly as visual reporting tools, lacking the knowledge discovery capabilities imperative to producing actionable intelligence for the organizations they serve.

The expenses that many of these data management processes pose for companies, and the difficulties of integrating them with existing applications, may prove fruitless if they are not able to provide real business solutions. Hence, data collection should not be done indiscriminately, nor its management conducted with little forethought. Before deciding on a Business Intelligence system, it is necessary to begin with a strategic business question to frame the data management process, in order to ensure the successful acquisition and application of big data, both structured and unstructured.

Joe Sticca, Chief Operating Officer of True Interaction, contributed to this post.

By Justin Barbaro

Ensure Data Discovery ROI with Data Management

The explosion of data is an unavoidable facet of today’s business landscape. Domo recently released the fourth annual installment of its Data Never Sleeps research for 2016, illustrating the amount of data generated in one minute across a variety of platforms and channels. The astounding rate at which data has been growing shows no indication of slowing down, with the digital universe anticipated to reach nearly 44 trillion gigabytes of data by the year 2020. With data being produced at an unprecedented rate, companies are scrambling to put data management practices in place to avoid being overwhelmed and eventually bogged down by the deluge of information that should be informing their decisions. There is a plethora of challenges in calibrating and enacting an effective data management strategy, and according to Experian’s 2016 Global Data Management Benchmark Report, a significant number of these issues are internal.

Inaccurate Data

Most businesses strive for more data-driven insights, a feat made more difficult by the collection and maintenance of inaccurate data. Experian reports that 23% of customer data is believed to be inaccurate. Over half of the companies surveyed attribute these errors to human error, though a lack of internal manual processes, inadequate data strategies, and shortcomings in relevant technologies are also known culprits in the perpetuation of inaccurate data. While erroneous data entry is still largely attributed to human oversight, it is the blatant lack of technological knowledge and ability that bars many companies from leveraging their data, bringing us to our next point.

Data Quality Challenges

The sheer volume of information being generated by the second warrants a need for organizations to improve their data culture. Research shows that businesses face challenges in acquiring the knowledge, skills, and human resources to manage data properly. This is true of organizations of all sizes and resources, not just large companies: a baffling 94% of surveyed businesses admit to having experienced internal challenges when trying to improve data quality.

Reactive Approach

Experian’s data sophistication curve identifies four levels of data management sophistication based on the people, processes, and technology associated with the data: unaware, reactive, proactive, and optimized. While the ultimate goal is ascending to the optimized level of performance, only 24% of the polled businesses categorize their data management strategies as proactive, while the plurality (42%) admit to merely reaching the reactive level. The reactive approach is inefficient in many ways, a prominent one being the data management difficulties, both internal and external, caused by waiting until specific issues with data crop up before addressing and fixing them, as opposed to detecting and resolving such problems in a timely manner.

The most deleterious disadvantage of failing to address these pressing issues as they are detected is the careless neglect of invaluable business insight that is concealed in the mass of available data. Data management systems that have not optimized their operations will not be able to process data to produce relevant information in a timely manner. The lack of machine learning mechanisms within these sub-optimal systems will hinder businesses in their knowledge discovery process, barring organizations from making data-driven decisions in real time.

Denisse Perez, Content Marketing Analyst for True Interaction, contributed to this post.

By Joe Sticca

Wrangling Data for Compliance, Risk, and Regulatory Requirements

N.B. This article addresses the financial services industry; however, the insights and tips therein are applicable to nearly any industry today. (~EIC)

The financial services industry has always been characterized by its long list of compliance, risk, and regulatory requirements. Since the 2008 financial crisis, the industry is more regulated than ever, and as organizations undergo digital transformation and financial services customers continue to do their banking online, the myriad compliance, risk, and regulatory requirements for financial institutions will only increase from here. On a related note, organizations are continuing to invest in their infrastructure to meet these requirements. IDC Financial Insights forecasts that the worldwide risk information technologies and services market will grow from $79 billion in 2015 to $96.3 billion in 2018.

All of this means reams of data. Financial firms by nature produce enormous amounts of data, and due to compliance requirements, must be able to store and maintain more data than ever before. McKinsey Global Institute reported in 2011 that the financial services industry has more digitally stored data than any other industry.

To succeed in today’s financial industry, organizations need to take a cumulative, 3-part approach to their data:

1. Become masters at data management practices.

This appears obvious, but the vast number of compliance, risk, and regulatory requirements necessitates that organizations become adept at data management. Capgemini identified 6 aspects of data management best practices:

Data Quality. Data should be kept optimal through periodic review, and all standard dimensions of data quality (completeness, conformity, consistency, accuracy, duplication, and integrity) must be demonstrated.

Data Structure. Financial services firms must decide whether their data structure should be layered or warehoused. Most prefer to warehouse data.

Data Governance. It is of utmost importance that financial firms implement a data governance system, including a data governance officer who owns the data and monitors data sources and usage.

Data Lineage. To manage and secure data appropriately as it moves through the corporate network, it needs to be tracked to determine where it is and how it flows.

Data Integrity. Data must be maintained to assure accuracy and consistency over the entire lifecycle, and rules and procedures should be imposed within a database at the design stage.

Analytical Modeling. An analytical model is required to parcel out and derive relevant information for compliance.
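A minimal sketch of how some of the quality dimensions from the first practice above might be measured over a batch of records. The field names and validation rules are illustrative inventions, not Capgemini’s:

```python
import re

def profile_quality(records: list) -> dict:
    """Compute a few standard data-quality dimensions for a batch of
    customer records (field names are illustrative)."""
    required = ("id", "email", "balance")
    total = len(records)

    # Completeness: share of records with every required field populated
    complete = sum(all(r.get(f) not in (None, "") for f in required) for r in records)

    # Conformity: share of records whose email matches a simple pattern
    email_ok = sum(bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", str(r.get("email", ""))))
                   for r in records)

    # Duplication: share of records whose id appears more than once
    ids = [r.get("id") for r in records]
    dupes = sum(ids.count(i) > 1 for i in ids)

    return {
        "completeness": complete / total,
        "conformity": email_ok / total,
        "duplication": dupes / total,
    }

batch = [
    {"id": 1, "email": "a@example.com", "balance": 10.0},
    {"id": 2, "email": "not-an-email",  "balance": 5.0},
    {"id": 2, "email": "b@example.com", "balance": None},  # duplicate id, missing balance
]
print(profile_quality(batch))
```

Running such a profile periodically, and alerting when a dimension degrades, is one concrete way to operationalize the “periodic data review” the practice calls for.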

2. Leverage risk, regulatory, and compliance data for business purposes.

There is a bright side to data overload; many organizations aren’t yet taking full advantage of the data they generate and collect. According to PWC, leading financial institutions are now beginning to explore the strategic possibilities of the risk, regulatory, and compliance data they own, as well as how to use insights from this data and analyses of it in order to reduce costs, improve operational efficiency, and drive revenue.

It’s understandable that in many financial institutions today, the risk, regulatory, and compliance side of the organization does not actively collaborate with the sales and marketing teams. The tendency toward siloed structures and behavior in business makes it difficult to reuse data across the organization. Certainly an organization can’t change completely overnight, but consider the tips below to help establish incremental change within your organization:

Cost Reduction: Eliminate the need for business units to collect data that the risk, regulatory, and compliance functions have already gathered, and reduce duplication of data between risk, regulatory, compliance, and customer intelligence systems. Avoid wasted marketing expenses by carefully targeting marketing campaigns based upon an improved understanding of customer needs and preferences.

Increased Operational Efficiency: Centralize management of customer data across the organization. Establish a single source of truth to improve data accuracy. Eliminate duplicate activities in the middle and back office, and free resources to work on other revenue generating and value-add activities.

Drive Revenue: Customize products based upon enhanced knowledge of each customer’s risk profile and risk appetite. Identify new customer segments and potential new products through better understanding of customer patterns, preferences, and behaviors. Enable a more complete view of the customer to pursue cross-sell and up-sell opportunities.

3. Implement a thorough analytics solution that provides actionable insight from your data.

Today, it’s possible for financial organizations to implement an integrated Machine Learning component that runs in the background, one that can ingest data of all types from any number of people, places, and platforms; intelligently normalize and restructure it so it is useful; run a dynamic series of actions based upon data type and whatever context your business process is in; and create dynamic BI data visualizations out-of-the-box.

Machine Learning platforms like SYNAPTIK enable organizations to create wide and deep reporting, analytics, and machine learning agents without being tied to expensive proprietary frameworks and templates, such as Tableau. SYNAPTIK allows blending of internal and external data to produce valuable new insights. There’s no data modeling required to drop in 3rd party data sources, so it is even possible to create reporting and insight agents across data pools.

By Michael Davison

Achieving Continuous Business Transformation Through Machine Learning

In the past, I’ve blogged about applying Agile methodology to businesses at large: what we are seeing in business today is that the concept of “business transformation” isn’t something that is undergone once or even periodically – business transformation is becoming a continuous process.

Today, businesses at large – not just their creative and development silos – benefit from operating in an Agile manner, most importantly in the area of responding to change over following a plan. Consider the words of Christa Carone, chief marketing officer for Xerox:

“Where we are right now as an enterprise, we would actually say there is no start and stop because the market is changing, evolving so rapidly. We always have to be aligning our business model with those realities in the marketplace.”

This is an interesting development, but how, technically, can businesses achieve True Interaction in a continually transforming world?

Business Transformation: Reliance upon BI and Analytics

In order for an organization’s people to quickly and accurately make decisions regarding a product, opportunity, or sales channel, they must rely upon historic and extrapolated data provided by the organization’s Data Warehouse/Business Analytics group.

In a 2016 report by ZS Associates that interviewed 448 senior executives and officials across a myriad of industries, 70% of respondents replied that sales and marketing analytics is already “very important” or “extremely important” to their business’ competitive advantage. Furthermore, the report reveals that in just two years’ time, 79% of respondents expect this to be the case.

However, some very interesting numbers reveal cracks in the foundation: Only 12% of the same respondents could confirm that their organization’s BI efforts are able to stay abreast of the continually changing industry landscape. And only 2% believe business transformation in their company has had any “broad, positive impact.”

5 Reasons for lack of BI impact within an organization

1) Poor Data Integration across the business

Many legacy BI systems include a suite (or a jumbled mess) of siloed applications and databases: there might be an app for Production Control, MRP, Shipping, Logistics, and Order Control, for example, with corresponding databases for Finance, Marketing, Sales, Accounting, Management Reporting, and Human Resources – in all, creating a Byzantine knot of data hookups and plugins that are unique, perform a singular or limited set of functions, and are labor-intensive to install, scale, and upgrade.

2) Data collaboration isn’t happening enough between BI and Business Executives

Executives generally don’t appear to have a firm grasp of the pulse of their BI: only 41% of ZS Associates’ report participants thought that a collaborative relationship between professionals working directly with data analytics and those responsible for business performance exists at their company.

3) Popular Big Data Solutions are still siloed

Consider Ed Wrazen’s critique of Hadoop: During an interview at Computing’s recent Big Data and Analytics Summit, the Vice-President of Product Management at data quality firm Trillium Software revealed:

“My feeling is that Hadoop is in danger of making things worse for data quality. It may become a silo of silos, with siloed information loading into another silo which doesn’t match the data that’s used elsewhere. And there’s a lot more of it to contend with as well. You cannot pull that data out to clean it as it would take far too long and you’d need the same amount of disk storage again to put it on. It’s not cost-effective or scalable.”

4) Data Integration is still hard to do.

Only 44% of ZS Associates’ respondents ranked their organizations as “good” or “very good” at data aggregation and integration. 39% said that data integration and preparation were the biggest challenges within the organization, while 47% listed this as one of the areas where improvement would produce the most benefit.

5) Only part of the organization’s resources can access BI

Back in the day, BI was the sole province of data experts in IT or information analyst specialists. Now companies are seeing the benefits of democratizing the access and analysis of data across the organization. Today, a data analyst could be a product manager, a line-of-business executive, or a sales director. In her book Successful Business Intelligence, Cindi Howson, author and instructor for The Data Warehousing Institute (TDWI), famously remarked:

“To be successful with BI, you need to be thinking about deploying it to 100% of your employees as well as beyond organizational boundaries to customers and suppliers… The future of business intelligence centers on making BI relevant for everyone, not only for information workers and internal employees, but also beyond corporate boundaries, to extend the reach of BI to customers and suppliers.”

Business leaders should examine these symptoms in the context of their own organizations.

Is there a solution to these issues?

True Interaction CEO O. Liam Wright has a novel approach to a new kind of BI solution. One that involves machine learning.

“In my 20 years in the business I’ve discovered several worlds in business that never spoke to each other properly, due to siloed information spaces, communications, platforms and people. Today’s fluid world necessitates changing the BI game completely: If you want to have a high level of true interaction between your systems, platforms, customers, internal multiple hierarchical department structures, then you NEED to flip the game around. SQL-based Dashboards are old news; they are so 2001.

“You can’t start with a structured, SQL-based situation that inevitably will require a lot of change over time – organizations don’t have the IT staff to continually support this kind of situation – it’s too expensive.”

Instead Liam took a different approach from the beginning:

“I thought, what if we capitalized on the sheer quantity and quality of data today, captured data in unstructured (or structured) formats, and put this data into a datastore that doesn’t care what type of data it is? Then, as opposed to expensive, rigid, SQL-based joins on data types, we implement lightweight ‘builds’ on top of the data. These lightweight builds enable businesses to start creating software experiences off of their base dataset pretty quickly. They also enable organizations to get Business Intelligence right out of the box as soon as they perform a build: dynamic dashboards and data visualizations which can become much more sophisticated over time, as you pull in and cross-pollinate more data. Then, when the data is further under control, you can create software agents that assist you in daily data processes, or agents that get those controls out the door.”
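The idea Liam describes can be caricatured in a few lines: a datastore that accepts records of any shape, with lightweight “builds” layered on top as views. This is a toy sketch of the concept, not SYNAPTIK’s actual implementation; all names and record shapes are invented:

```python
# A toy schema-agnostic datastore: records of any shape go in as-is,
# and lightweight "builds" are simply views computed on top of them.
store = []  # accepts any dict; no schema enforced up front

def ingest(record: dict) -> None:
    store.append(record)

def build(view_fn):
    """A 'build' maps whatever is in the store to a usable structure,
    skipping records the view does not know how to interpret."""
    out = []
    for rec in store:
        mapped = view_fn(rec)
        if mapped is not None:
            out.append(mapped)
    return out

# Ingest structured and semi-structured records side by side
ingest({"source": "crm", "customer": "Acme", "revenue": 1200})
ingest({"source": "web", "visitor_id": "x91", "pages": 14})
ingest({"source": "crm", "customer": "Globex", "revenue": 800})

# A revenue view: only CRM records participate; web records are skipped
revenue_view = build(lambda r: (r["customer"], r["revenue"])
                     if r.get("source") == "crm" else None)
print(revenue_view)  # → [('Acme', 1200), ('Globex', 800)]
```

The key property is that adding a new data source never requires restructuring what is already stored; only a new view function is needed.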

What is this BI and Machine Learning marriage, exactly?

So what exactly is Liam describing? Modern business has just crossed over the threshold into an exciting new space: organizations can routinely implement an integrated Machine Learning component that runs in the background, one that can ingest data of all types from any number of people, places, and platforms; intelligently normalize and restructure it so it is useful; run a dynamic series of actions based upon data type and whatever context your business process is in; and create dynamic BI data visualizations out-of-the-box.

True Interaction’s machine learning solution is called SYNAPTIK.

SYNAPTIK involves 3 basic concepts:

DATA: SYNAPTIK can pull in data from anywhere. Its user-friendly agent development framework can automate most data aggregation and normalization processes. It can be unstructured data from commerce, web, broadcast, mobile, or social media; it can be audio, CRM, apps, or images. It can also pull in structured data, for example from SAP, Salesforce, Google, communications channels, publications, Excel sheets, or macros.

AGENT: A software agent is a program that acts on behalf of a user or another program. Agents can be configured not only to distribute data intelligence in flexible ways but also to directly integrate with, and take action in, other internal and external applications, enhancing your business processes and goals through quicker transformation.

An Agent is composed of two parts: the operator and the controls. Think of an operator as the classic telephone operator of the 1940s who manually plugged in and unplugged your calls in the background. SYNAPTIK enables you to see how the operator works:

Operators can be written in several forms, such as JavaScript, PHP/cURL, or Python. Organizations can write their own operators, or True Interaction’s development team can write them for you. An Agent also gives the user a control interface – a form field, or drag-and-drop functionality – in order to add specific assets or run any variety of functions. In addition, SYNAPTIK makes it easy to connect to a REST API, enabling developers to write their own software on top of it.

BUILD: A build simply brings the DATA and AGENT components together, ultimately enabling you to better understand your organization’s various activities within your data space.
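The operator/controls split described above can be sketched minimally. The `Agent` class, the `summarize` operator, and the control names below are illustrative inventions, not SYNAPTIK’s API:

```python
class Agent:
    """Minimal sketch of the operator/controls split: the operator is the
    function doing the work in the background; the controls are the
    user-facing parameters it is invoked with."""
    def __init__(self, operator, controls: dict):
        self.operator = operator
        self.controls = controls

    def run(self, data):
        return self.operator(data, **self.controls)

# A hypothetical operator that filters rows by a metric and summarizes them
def summarize(rows, metric, minimum):
    kept = [r[metric] for r in rows if r[metric] >= minimum]
    return {"count": len(kept), "total": sum(kept)}

# The controls expose the operator's knobs to the user
agent = Agent(summarize, controls={"metric": "sales", "minimum": 100})
result = agent.run([{"sales": 250}, {"sales": 80}, {"sales": 120}])
print(result)  # → {'count': 2, 'total': 370}
```

Swapping the operator changes what the agent does; changing the controls changes how it does it, without touching the operator’s code.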

A new model of BI: What is the Return?

– Machine Learning platforms like SYNAPTIK enable organizations to create wide and deep reporting, analytics, and machine learning agents without being tied to expensive proprietary frameworks and templates, such as Tableau. SYNAPTIK allows blending of internal and external data to produce valuable new insights. There’s no data modeling required to drop in 3rd party data sources, so it is even possible to create reporting and insight agents across data pools.

– With traditional methods, data normalization requires hours and hours of time – indeed, the bulk of time today is spent on this – leaving very little time for deep analysis and none for deep insight. With SYNAPTIK, what once took 400 hours of monthly data management can take minutes, freeing nearly 400 hours for the analysis, discovery, and innovation needed to deliver results.

– Not only is it possible to create your own custom reports and analytic agents; SYNAPTIK also enables organizations to share their reporting agents for others to use and modify.

– The inherent flexibility of SYNAPTIK enables businesses to continually provide new data products/services to their customers.

– Not so far down the line: the establishment of the SYNAPTIK Marketplace, where you can participate, monetize, and generate additional revenue by allowing others to subscribe to your Agents and/or Data.

All of these returns contribute not only to augmenting organizational leadership and innovation throughout the hierarchy, but also to producing incredibly valuable intelligence monetization, breakthrough revenue, and improved “client stickiness” through the rollout of new data products and services. And, best of all, it puts businesses into a flexible data environment that does, quite elegantly, enable continuous transformation as industries, markets, and data landscapes continue to change.

We’ve got the Experience you Need

True Interaction has logged hundreds of thousands of hours of research, design, development, and deployment of consumer products and enterprise solutions. Our services directly impact a variety of industries and departments through our deep experience in producing critical business platforms.

We Can Integrate Anything with… Anything
Per the endless variable demands of each of our global clients, TI has seen it all, and has had to do it all. From legacy systems to open source, we can determine the most optimal means to achieve operational perfection, devising and implementing the right tech stack to fit your business. We routinely pull together disparate data sources, fuse together disconnected silos, and do exactly what it takes for you to operate with tight tolerances, making your business engine hum. Have 100+ platforms? No problem. Give us a Call.

By Michael Davison

Can Blockchain help Media’s Data Challenges?

There has been a lot of discussion around blockchain and its framework in the financial services world. However, in our organization’s continuing research and operations with our clients, we are beginning to uncover specific opportunities that extend into the media space. We feel blockchain can and should serve as a universal and cost-effective foundation for data management and insight decision management – regardless of organizational size, output, or industry.

In our collective experience here at True Interaction, there have always been three distinct challenges:

1. Data aggregation and normalization:

With more and more linear and digital channel fragmentation, the process of aggregating and normalizing data for review and dissemination has weighed down efficient insight decision making. One way to look at it is: you spend 40-60% of your time aggregating and normalizing data from structured and unstructured data types (PDFs, emails, word docs, csv files, API feeds, video, text, images, etc.) across a multitude of sources (media partners, internal systems, external systems & feeds, etc.). This leaves little time for effective review and analysis – essential to determining any insight.
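At bottom, the aggregation and normalization burden described above is the work of mapping each source’s fields onto one common schema. A minimal sketch, with made-up feeds and field names:

```python
import csv
import io
import json

# Heterogeneous raw inputs: a CSV export and a JSON API payload,
# each with its own field names (both are invented for illustration)
csv_feed = "partner,impressions\nChannelA,1200\nChannelB,900\n"
api_feed = '[{"source": "ChannelC", "views": 450}]'

def normalize(raw, kind):
    """Map each source's fields onto one common schema so review and
    analysis work on a single shape regardless of origin."""
    rows = []
    if kind == "csv":
        for r in csv.DictReader(io.StringIO(raw)):
            rows.append({"channel": r["partner"], "impressions": int(r["impressions"])})
    elif kind == "json":
        for r in json.loads(raw):
            rows.append({"channel": r["source"], "impressions": int(r["views"])})
    return rows

unified = normalize(csv_feed, "csv") + normalize(api_feed, "json")
print(unified)
```

Every new partner or feed adds another branch of mapping logic like this, which is why the 40-60% figure above is so persistent; automating these mappings is where the time savings live.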

2. Data review, reporting, dashboards:

Once you have your data normalized from your various sources you can then move to building and producing your reporting dashboards for review and scenario modeling. This process is usually limited to answering questions you already know.

3. Insight and action

Actionable insight is usually limited, largely because most time and resources are allocated to the two steps above. This process should entail a review of your data systems: are they providing you with insights beyond the questions you already know? In addition, where and how can “rules” or actions in other processes and systems be automated? Today there are plenty of business situations where manual coordination and communication are still needed in order to take action, eroding any competitive edge gained by executing quickly.

Blockchain can provide an efficient and universal data layer to serve your business intelligence tool set(s). This can help consolidate internal data repositories when aggregating and normalizing data. It can also be the de-facto ledger for all data management activities, as well as the database of record, easing and minimizing the management of proprietary data platforms and processes.
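The “de-facto ledger” idea can be sketched as a simple hash-linked chain of data-management events. This illustrates only the tamper-evidence property that blockchains provide; it is not a production blockchain framework, and the event fields are invented:

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> None:
    """Append a data-management event to a hash-linked ledger: each entry
    commits to the previous entry's hash, so any later edit to history
    changes every subsequent hash and becomes detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; False means history was tampered with."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"event": "ingest", "source": "partner_feed"})
append_entry(ledger, {"event": "normalize", "rows": 1200})
print(verify(ledger))  # → True

ledger[0]["record"]["rows"] = 9999  # tamper with history after the fact
print(verify(ledger))  # → False
```

A real blockchain adds distributed consensus on top of this chaining, which is what lets multiple media partners trust one shared record without a central owner.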
The following are additional resources:

How Blockchain Will Transform Media and Entertainment


Blockchain technology: 9 benefits & 7 challenges


Blockchain just fixed the Internet’s worst attribute


By Joe Sticca

The Changing Terrain of Media in the Digital Space

The rapid digitization of the media industry does not merely address the immediate needs posed by the market; it also anticipates the constantly changing behavior and rising expectations of an increasingly digital customer. The World Economic Forum points to a growing middle class, urbanization, the advent of tech-savvy millennials demanding instantaneous access to content on a variety of platforms, and an aging world population invariably accompanied by the need for services designed for an older audience as the most pronounced demographic factors currently reshaping the media landscape. The expanding list of accommodations that customers are coming to expect from the media industry falls, more or less, within the realms of accessibility and personalization of content.

The Path to Digital Transformation

Average weekday newspaper circulation has been on a steady decline, falling another 7% in 2015 according to the most recent Pew Research Center report. This inevitable dwindling of interest in print publications could be ascribed to the rising demand for media companies to adopt a multi-channel strategy that enables the audience to access content across different platforms. Companies remedy their absence of a formidable digital presence in a variety of ways. One of the most common resolutions that companies have resorted to involves redesigning their business model by bundling print subscriptions with mobile device access, a measure enacted to address the 78% of consumers who view news content on mobile browsers. A more radical approach is opting for a complete digital transformation, a decision reached by The Independent earlier this year when it became the “first national newspaper title to move to a digital-only future.” The appeal of having information become readily available on any screen of the customer’s choosing is magnified by the expectation of uniformity and equally accessible and engaging user interfaces across all devices. Of course, convenience to the customer does not only rely on their ability to access content on the platform of their choice, but also at any point they desire, hence the focus on establishing quick response times and flexibility of content availability.

Another expectation that consumers have come to harbor, aside from unhindered access to content, is the minimization, if not complete elimination, of superfluous information. According to the 2016 Digital News Report by the Reuters Institute, news organizations such as the BBC and the New York Times are striving to provide more personalized news on their websites and applications. In some cases, people are offered information and clips on topics in which they have indicated an interest. Companies are also developing “auto-generated recommendations based in part on the content they have used in the past.” Transcending written material, streaming platforms like Pandora and Netflix utilize Big Data to analyze and discern the characteristics of an individual’s preferences, feeding information into a database that then uses predictive analytics to determine content the individual would be predisposed to enjoy. In previous blog posts, we have discussed the value of understanding Big Data, emphasizing how execution based on the insight gleaned from Big Data can be as crucial to a company’s profitability as the insight itself. As evidenced by this growing practice of collecting consumer data to cultivate personalized content, the media industry has not been remiss in observing the discernible success that data-driven companies enjoy relative to competitors less reliant on data. Finally, perhaps equally satisfying as browsing personalized, recommended content based on one’s past likes and preferences is the exclusion of repetitive content, as informed by one’s viewing history.
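The predictive personalization described above can be caricatured as scoring unseen titles against a profile built from viewing history. The catalog, tags, and scoring rule below are invented for illustration; production systems like Netflix’s use far richer models:

```python
from collections import Counter

def recommend(history: list, catalog: dict, top_n: int = 2) -> list:
    """Score unseen titles by overlap between their tags and a profile
    built from the viewer's history (titles and tags are made up)."""
    profile = Counter(tag for title in history for tag in catalog[title])
    unseen = [t for t in catalog if t not in history]  # exclude repeats
    scored = sorted(unseen,
                    key=lambda t: sum(profile[tag] for tag in catalog[t]),
                    reverse=True)
    return scored[:top_n]

catalog = {
    "Border Docs": {"documentary", "politics"},
    "Space Lab":   {"documentary", "science"},
    "Laugh Track": {"comedy"},
    "Deep Oceans": {"documentary", "science", "nature"},
}
history = ["Border Docs", "Space Lab"]
print(recommend(history, catalog))  # → ['Deep Oceans', 'Laugh Track']
```

Note how the history filter also delivers the last point above: titles already watched are excluded from the recommendations outright.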

Media companies embrace their ascent into digital space in a plethora of ways. Some elect for a complete digital transformation, conducting a substantial part if not all of their business within browsers and applications rather than in print. Others focus on enhancing the customer experience by maintaining contact with consumers through all touch points and following them from device to device, all the while gathering data to be used in optimizing the content provided. Another means through which media companies are realizing their full digital potential is the digitizing of their processes and operations. These businesses are initiating a shift toward digital products, a decision that is both cost-effective (cutting costs by up to 90% on information-intensive processes) and able to bolster the efficacy of their data mining efforts. Warner Bros was one of the first in the industry to transform its storing and sharing of content into a single, fully integrated digital operation, beginning with the Media Asset Retrieval System (MARS). This innovative digital asset management system ushered in a transformation that lowered Warner Bros’ distribution and management costs by 85%.

A Glimpse into the Future

So what’s next in this journey to digital conversion? According to the International News Media Association (INMA), all roads lead to the Internet of Things (IoT). Business Insider Intelligence projects that more than 18 billion devices will be connected to the Web by 2018. The progression into this new era of tech, where information can be harvested from the physical world itself, will not go unobserved by the media industry. Media companies are tasked with evolving beyond the screen.

Mitch Joel, President of Mirum, writes:

“Transient media moments does not equal a strong and profound place to deliver an advertising message… the past century may have been about maximizing space and repetition to drive brand awareness, but the next half century could well be about advertising taking on a smaller position in the expanding marketing sphere as brands create loyalty not through impressions but by creating tools, applications, physical devices, true utility, and more robust loyalty extensions that makes them more valuable in a consumer’s life.”

Big Data anchors these efforts in the Digital Age, and the IoT will provide new, vital networks of information to fortify them.
Contact our team to learn more about how True Interaction can develop game-changing platforms that cut waste and redundancy as well as boost margins for your media company.

By Justin Barbaro

Big Data: Trends in the Education Sector

Our recent blog article highlighted 6 things to keep in mind when optimizing Big Data for your company. An essential component highlighted by Michael Davison, co-founder and Editor-in-chief of True Interaction, is the understanding that data analytics provides AN answer, rather than THE answer.

This concept resonates in the education sector, with the U.S. Department of Education calling the use of student data systems to improve education a “national priority.” Teachers are inundated with data points that quantify formative assessment results, parent call logs, absences, time-on-task, observations and more. But as with any sector, what matters most in education is how you use data, rather than that you have it.

Pasi Sahlberg, a Finnish educator, author and scholar, wrote:

“Despite all this new information and benefits that come with it, there are clear handicaps in how big data has been used in education reforms. In fact, pundits and policymakers often forget that Big data, at best, only reveals correlations between variables in education, not causality.”

Despite pronouncements such as this from key education reformers, big data and analytics are proliferating in the ed-tech industry. A multitude of companies collect and analyze information on how students interact with digital content.

In the United States, almost all teachers (93 percent) use some form of digital tool to guide instruction. But more than two-thirds of teachers (67 percent) say they are not fully satisfied with the effectiveness of the data or the tools for working with data that they have access to (Gates Foundation, 2015).

The majority of school districts (70 percent) report having had an electronic student information system providing access to enrollment and attendance data for six or more years, and more recently, districts have begun acquiring electronic data systems:

– 79% report having an assessment system that organizes and analyzes benchmark assessment data

– 77% report having a data warehouse that provides access to current and historical data on students

– 64% report having an instructional or curriculum management system

Despite this proliferation in the use of data systems, the Gates Foundation found that there are additional barriers that prevent full implementation of data systems in schools.

Because of these barriers, teachers say data are often “siloed” and difficult to work with, inflexible, and unable to track student progress over time. How can school and district leaders optimize the data systems to impact student performance?

1. Involve teachers in data analysis.

Often, teachers are seen as the data “collectors” while school-based and district leaders are tasked with analysis, synthesis, and recommendations. This might contribute to the slow modification of classroom practices in response to data. With teachers left out of the data analysis, there is a significant barrier to classroom integration. It is important to recognize that in terms of promoting student growth, teachers know best the strategies and methods to employ. The disconnect between teachers and the top-down approach to data use in schools has created a false narrative that teachers are unmotivated and disinterested in employing data-driven instruction in their classrooms. In reality, 78 percent of teachers believe that data can validate where their students are and where they can go. District leaders and product developers can harness this desire to integrate data to provide customized solutions, tailored to grade level, content area, and demographics.

2. Invest in professional development of staff to integrate tools and practice.

Similar to our first recommendation, it is important that education leaders invest resources – financial, human capital, and time – to the development of teachers’ capacity to fully utilize and customize lessons based on student data.

Various research studies have found that those who participate in professional development programs that include coaching/mentoring are more likely to deploy new instructional strategies in the classroom. Effective professional development that truly enables teachers to integrate data systems is continuous and ongoing. Discrete training sessions to show teachers how to use specific hardware/software tools are important, but truly integrative professional development should go further by providing ongoing support and on-the-job training on how to collect/analyze data and how to adjust teaching in response to data analytics. Fishman (2006) noted that learning how to use technology is not the same as learning how to teach with technology.

3. Promote the use of personalized learning.

69 percent of teachers surveyed by the Gates Foundation believe that improving student achievement depends on tailoring instruction to meet individual students’ needs. Connecting data from multiple sources across a student’s academic, social, behavioral, and emotional experiences may help teachers gain a fuller picture of each student. Schools that adopt a learner‐centered pedagogy tend to experience greater integration and more effective use of technology in the classroom.

4. Work with product developers or vendors who can conduct full analysis of teachers’ needs when designing or optimizing data systems.

Over 60 percent of districts reported that lack of interoperability across data systems was a barrier to expanded use of data-driven decision making. True Interaction produces custom full-stack, end-to-end technology solutions across web, desktop, and mobile, integrating multiple data sources to create a customized data solution. True Interaction can determine the optimal means to achieve operational excellence, devising and implementing the right tech stack to fit each school’s or district’s specific needs. True Interaction pulls together disparate data sources, fuses disconnected silos, and does what it takes for school data systems to operate with high levels of efficiency and efficacy, ultimately leading to improved student achievement outcomes.
Contact our team to learn more about how we can optimize your school or district data system.

Joe Sticca, Chief Operating Officer of True Interaction, contributed to this post.

by Jessica Beidelman

6 Protips on Getting the Most out of Your Big Data

Here’s some interesting news: a recent Teradata study showed a strong correlation between a company’s reliance on data when making decisions and its profitability and capacity to innovate. According to the study, data-driven companies are more likely to generate higher profits than competitors who report a low reliance on data. Access to data, and to quantitative tools that convert numbers into insights, is two to three times more common in data-centric companies, and those companies are much more likely to reap the benefits of data initiatives, from increased information sharing, to greater collaboration, to better quality and speed of execution.

Today Big Data is a big boon for the IT industry; organizations that gather significant data are finding new ways to monetize it, while the companies that deliver the most creative and meaningful ways to display the results of Big Data analytics are lauded, coveted, and sought after. But for certain, Big Data is NOT some magic panacea that, when liberally applied to a business, creates giant fruiting trees of money.

First let’s take a look at some figures that illustrate how BIG “Big Data” is.

– A few hundred digital cameras have enough combined memory to store the contents of every printed book in the Library of Congress.

– Just 10 minutes of the world’s email traffic equals the contents of every printed book in the Library of Congress; that’s 144 times the Library’s contents every day.

– Every day, we create 2.5 quintillion bytes of data – so much that 90% of the data in the world today has been created in the last two years alone.

– Only 3% of potentially useful data is tagged, and even less is analyzed.

– In 2010 there were about 1 trillion gigabytes of data in the digital universe, a figure predicted to roughly double every two years, reaching 40 trillion gigabytes by 2020.
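As a quick sanity check on those growth figures, here is the arithmetic behind the commonly cited projection, under the assumption that the digital universe roughly doubles every two years from 1 trillion gigabytes (1 zettabyte) in 2010:

```python
# Worked check of the growth figures above: data volume starting at
# 1 trillion GB in 2010 and roughly doubling every two years.
start_year, start_zb = 2010, 1.0   # 1 trillion GB = 1 zettabyte

volume = {year: start_zb * 2 ** ((year - start_year) / 2)
          for year in range(2010, 2021, 2)}

print(volume[2020])   # 32.0 ZB, the same order as the 40 ZB forecast
```

Simple doubling yields 32 ZB by 2020; the 40 ZB figure assumes slightly faster growth, but the order of magnitude is the same.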

The sheer size of large datasets forces us to come up with new methods for analysis, and as more and more data is collected, more and more challenges and opportunities will arise.

With that in mind, let’s examine 6 things to keep in mind when considering Big Data.

1. Data analytics gives you AN answer, not THE answer.

In general, data analysis cannot make perfect predictions; rather, it can make better predictions than one could make without it. And unlike pure mathematics, data analytics does not eliminate the messiness of the dataset: there is always more than one answer. You can glean insights from any system that processes data and outputs an answer, but that answer is never the only one.

2. Data analytics involves YOUR intuition as a data analyst.

If your method is unsound, then the answer will be wrong. In fact, the full potential of quantitative analytics can be unlocked only when combined with sound business intuition. Mike Flowers, chief analytics officer for New York City under Mayor Bloomberg, explained the fallacy behind either-or thinking as such: “Intuition versus analytics is not a binary choice. I think expert intuition is the major missing component of all the chatter out there about analytics and being data driven.”

3. There is no single best tool or method to analyze data.

There are two general kinds of data and, as you might expect, they need to be analyzed differently; not every analysis will necessarily involve both.

Quantitative data refer to the information that is collected as, or can be translated into, numbers, which can then be displayed and analyzed mathematically. It can be processed using statistical methods such as calculating the mean or average number of times an event or behavior occurs over a unit of time.

Because numbers are “hard data” and not subject to interpretation, these methods can give nearly definitive answers to different questions. Various kinds of quantitative analysis can indicate changes in a dependent variable related to frequency, duration, intensity, timeline, or level, for example. They allow you to compare those changes to one another, to changes in another variable, or to changes in another population. They might be able to tell you, at a particular degree of reliability, whether those changes are likely to have been caused by your intervention or program, or by another factor, known or unknown. And they can identify relationships among different variables, which may or may not mean that one causes another. (Source: http://ctb.ku.edu/en/table-of-contents/evaluate/evaluate-community-interventions/collect-analyze-data/main)
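As a concrete, entirely hypothetical example of the kind of quantitative summary described above, here is a sketch that counts how often a behavior occurs per unit of time and summarizes it with basic descriptive statistics:

```python
import statistics

# Hypothetical observations: occurrences of a behavior per one-hour window.
events_per_hour = [4, 7, 5, 6, 4, 8]

mean_rate = statistics.mean(events_per_hour)   # average occurrences per hour
spread    = statistics.stdev(events_per_hour)  # variability across windows

print(f"mean={mean_rate:.2f}/hour, stdev={spread:.2f}")
```

Comparing such summaries across time periods, groups, or interventions is the basic mechanic behind the quantitative comparisons the paragraph describes.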

Qualitative data are items such as descriptions, anecdotes, opinions, quotes, and interpretations, and are generally either not reducible to numbers or considered more valuable or informative if left as narratives. Qualitative data can sometimes tell you things that quantitative data can’t, such as why certain methods are working or not working, whether part of what you’re doing conflicts with participants’ culture, or what participants see as important. They may also show you patterns – in behavior, physical or social environment, or other factors – that the numbers in your quantitative data don’t, and occasionally even identify variables that researchers weren’t aware of. There are several methods that can be used when analyzing qualitative data:

Content Analysis: In general, start with some ideas about hypotheses or themes that might emerge, and look for them in the data that you have collected.

Grounded Analysis: Similar to content analysis in that it uses comparable coding techniques; however, you do not start from a defined point. Instead, you allow the data to ‘speak for itself’, with themes emerging from the discussions and conversations.

Social Network Analysis: Examines the links between individuals as a way of understanding what motivates behavior.

Discourse Analysis: Analyzes not only conversation but also the social context in which the conversation occurs, including previous conversations, power relationships, and the concept of individual identity.

Narrative Analysis: Looks at the way in which stories are told within an organization, in order to better understand the ways in which people think and are organized within groups.

Conversation Analysis: Largely used in ethnographic research; assumes that conversations are governed by rules and patterns that remain the same whoever is talking, and that what is said can only be understood by looking at what happened both before and after.
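Of these methods, content analysis is the easiest to sketch in code. The following is a deliberately simplified illustration, with invented themes, keyword lists, and responses; real content analysis involves human coders and far richer coding schemes.

```python
import re
from collections import Counter

# Pre-defined themes and (hypothetical) keyword lists to look for in the data.
THEMES = {
    "price":   {"cost", "price", "expensive", "cheap"},
    "support": {"help", "support", "response"},
}

def code_responses(responses, themes):
    # Count how many responses mention each theme at least once.
    counts = Counter()
    for text in responses:
        words = set(re.findall(r"[a-z']+", text.lower()))
        for theme, keywords in themes.items():
            if words & keywords:   # response mentions this theme
                counts[theme] += 1
    return counts

responses = [
    "The price is too high for what you get.",
    "Support was quick to help when I asked.",
    "Cheap, and their support response was fast.",
]
print(code_responses(responses, THEMES))
```

Grounded analysis would differ here in that the keyword lists would not be fixed up front but would instead be built up as themes emerge from reading the data.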

Sometimes you may wish to use a single method, and sometimes several, whether of one type or a mixture of quantitative and qualitative approaches. Remember to have a goal or a question you want to answer – once you know what you are trying to learn, you can often come up with a creative way to use the data. It is your research, and only you can decide which methods will suit both your questions and the data itself. Quick tip: make sure that the method you use is consistent with the philosophical view that underpins your research, and within the limits of the resources available to you.

4. You do not always have the data you need in the way that you need it.

In a 2014 Teradata study, 42% of respondents said they find access to data cumbersome and not user-friendly. You might have the data, but format is KEY: the data might be rife with errors, incomplete, or composed of different datasets that have to be merged. When working with particularly large datasets, the greatest time sink – and the biggest challenge – is often getting the data into the form you need.
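A small, self-contained sketch of the merging-and-cleaning work described above, using two invented CSV exports that share a customer ID (the column names and values are assumptions for illustration):

```python
import csv
import io

# Two hypothetical exports that must be merged on a shared key; note the
# missing revenue for customer 2 and the missing region for customer 2.
SALES = "customer_id,revenue\n1,120\n2,\n3,90\n"
REGIONS = "customer_id,region\n1,East\n3,West\n"

def load(text):
    # Parse CSV text into a list of dicts keyed by column name.
    return list(csv.DictReader(io.StringIO(text)))

regions = {row["customer_id"]: row["region"] for row in load(REGIONS)}

merged = []
for row in load(SALES):
    merged.append({
        "customer_id": row["customer_id"],
        # Empty strings become explicit defaults rather than silent errors.
        "revenue": float(row["revenue"]) if row["revenue"] else 0.0,
        "region": regions.get(row["customer_id"], "unknown"),
    })

print(merged)
```

Even in this tiny example, most of the code is cleanup (typing the revenue column, handling missing keys) rather than analysis, which mirrors the point above.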

5. Not all data is equally available.

Sure, some data may exist free and easy on the Web, but more often than not, the sheer volume, velocity, or variety prevents an easy grab. Furthermore, unless there is an existing API or a vendor makes it easily accessible by some other means, you will ultimately need to write a script or even complex code to get the data the way you want it.
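The “write a script” step can be as simple as walking a paginated API. The sketch below inlines two hypothetical JSON “pages” so the pagination logic runs without a network; the field names (`items`, `next`) are assumptions, since every real API names these differently.

```python
import json

# Hypothetical paginated API responses, inlined so the logic runs offline.
PAGES = [
    json.dumps({"items": [{"id": 1}, {"id": 2}], "next": "/data?page=2"}),
    json.dumps({"items": [{"id": 3}], "next": None}),
]

def fetch(url, _pages=iter(PAGES)):
    # Stand-in for an HTTP GET (e.g. urllib.request.urlopen(url).read()).
    return next(_pages)

def collect_all(start_url):
    # Accumulate records, following the "next" link until it is exhausted.
    records, url = [], start_url
    while url:
        payload = json.loads(fetch(url))
        records.extend(payload["items"])
        url = payload["next"]
    return records

rows = collect_all("/data?page=1")
print(rows)  # → [{'id': 1}, {'id': 2}, {'id': 3}]
```

With a real endpoint, `fetch` would make the HTTP request, and you would add retries, rate limiting, and authentication, which is exactly where the “complex code” the paragraph mentions tends to accumulate.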

6. While an insight or approach adds value, it may not add enough value.

Broken Links: Why analytics investments have yet to pay off, a report from the Economist Intelligence Unit (EIU) and global sales and marketing firm ZS, found that although 70% of business executives rated sales and marketing analytics as “very” or “extremely” important, just 2% are ready to say they have achieved “broad, positive impact.”

In 2013, The Wall Street Journal reported that 44% of information technology professionals said they had worked on big-data initiatives that got scrapped. A major reason for so many false starts is that data is being collected merely in the hope that it turns out to be useful once analyzed. This type of behavior is putting the cart before the horse, and can be disastrous to businesses – again, remember to have a goal or question you want to answer.

Not every new insight is worth the time and effort required to integrate it into existing systems. Nor will every insight be totally new – and if every insight you surface seems new, something is probably wrong with the analysis.

Hopefully these tips will set you off in the right direction when you are considering incorporating additional datasets and their associated analytics platforms into your business processes. Good luck!

By Michael Davison