
Transform & Innovate Legacy Technology

Resources ($) spent treading water

Transforming legacy technology remains a difficult proposition. A lack of historical expertise among current stakeholders is one impediment; limited transparency into the soft costs buried in the technology budget is another. The full expense of legacy infrastructure can remain hidden until an organization is confronted with a significant technology outage or a sudden increase in maintenance costs. Maintaining legacy systems and applications consumes approximately 30% of an organization’s IT budget on average, which covers:

  • Maintenance
  • Talent (Human capital)
  • Internal and external compliance (e.g., GDPR)
  • Risk: security threats and trends
  • Agility, scalability and stability

One of the most important factors in dealing with transforming legacy technology is facing the reality of your organization’s culture and alignment.

Where to start…

Begin by drawing a line around your monolithic systems, code and infrastructure. Companies often believe they must reengineer all of their legacy systems from the ground up. This is a common error; instead, it is critical to delineate which portions of the system can be isolated and identified as ‘core’, and then build APIs or other structures around that “core”.

Develop an integration strategy and then construct an integration layer. This means some code will be written at the foundational or infrastructure level, then at the database layer and finally in the user experience environment. It is critical to identify those systems which can be detethered and then “frozen”. This facilitates a phased integration approach, upon which additional functionality can be layered. Depending on the complexity of the legacy architecture, wholesale changes may be cost prohibitive, so the ability to isolate, freeze and use a layered-build approach is an appropriate solution. This permits an organization to stabilize its application code and then build APIs or other integration layers around the ‘frozen’ areas of the technology stack. In some circumstances, blockchain can provide a fast and simple way to put an integration layer in place around legacy or ‘frozen’ environments.
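
To make the “freeze and wrap” idea concrete, here is a minimal sketch of what a thin API facade over a frozen legacy module might look like. It assumes Python with Flask, and the invoice lookup is a hard-coded stub standing in for the untouched legacy code; none of these names come from the article.

```python
# Minimal sketch of an API facade over a "frozen" legacy module.
# Assumptions: Flask for the API layer; LEGACY_INVOICES stands in for the
# untouched legacy code or database. All names here are illustrative.
from flask import Flask, abort, jsonify

LEGACY_INVOICES = {
    "1001": {"INV_NO": "1001", "CUST_NAME": "ACME Corp", "BAL_DUE": 420.50},
}

def legacy_lookup_invoice(invoice_id):
    # In practice this would call the frozen legacy system without modifying it.
    return LEGACY_INVOICES.get(invoice_id)

app = Flask(__name__)

@app.route("/api/v1/invoices/<invoice_id>")
def get_invoice(invoice_id):
    # Translate the legacy record into a stable, documented JSON contract,
    # so new systems integrate with this API instead of the frozen core.
    record = legacy_lookup_invoice(invoice_id)
    if record is None:
        abort(404)
    return jsonify({
        "id": record["INV_NO"],
        "customer": record["CUST_NAME"],
        "amount_due": record["BAL_DUE"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

New functionality is then layered on top of this contract, while the code behind it stays frozen.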

Missing Link

The most important component of transformation and innovation is the people within the organization, not the technology or the skill sets around the technology. Industry studies indicate a potential 20-30% increase in productivity and creative thought when individuals are engaged and aligned with the organization’s goals, and when the change processes align with individual goals and performance. All departments and stakeholders must be in alignment, from product, QA, development, and infrastructure to the end users. This is the most important aspect of any technology transformation initiative: creating a safe and collaborative environment that facilitates “creative dissent”.


Three Digital Marketing Innovations for Higher Education

EDITOR’S NOTE: This article is about how higher education marketing teams can use digital transformation to rethink their current strategies. True Interaction built SYNAPTIK, our Data Management, Analytics, and Data Science Simulation Platform, specifically to make it easy to collect and manage data, including marketing data, for more meaningful insights. For more information or a demo, please visit us at https://synaptik.co/ or email us at hello@www.true.design.


Making Human Resources More Inclusive, Driven By Data

EDITOR’S NOTE: This article is about how to approach and think about leveraging data to make human resources more inclusive across organizations. True Interaction built SYNAPTIK, our Data Management, Analytics, and Data Science Simulation Platform, specifically to make it easy to aggregate siloed data (e.g., from Human Resources, Finance, etc.) for more meaningful data discovery.

We know that Machine Learning (ML) and Artificial Intelligence (AI) will transform the future of work itself, but how will it affect the processes by which organizations choose, develop and retain their workers?

Katherine Ullman is a data scientist at Paradigm, a strategy consulting firm using social science to make companies more inclusive. Paradigm has partnered with a range of clients, including Airbnb, Pinterest, Asana, and Slack. Katherine and I recently discussed how her organization works with data and machine learning to help clients better understand the impact of their people processes, including the recruitment, selection, training and retention of underrepresented groups.

(NOTE: The word “impute” below is a technical term that refers to the assigning of a value by inference.)

TI: Being a data scientist at a human resources & strategy consultancy that leverages social science to make companies more inclusive sounds like an amazing job. Can you provide some insight into your daily work and the work you do with clients?

“One of the core services I work on as a data scientist is our comprehensive Diversity & Inclusion Assessment. (The assessment) is a multi-month project designed to identify barriers and design client-specific strategies for diversity and inclusion. We do that by collecting and analyzing both quantitative and qualitative data about the client’s people processes and the outcomes of those processes.”

“The first thing I am doing is understanding, cleaning, and linking that data…that’s a surprisingly large part of the process. Once we clean it, we analyze the data to understand how an organization attracts, selects, develops and retains its workforce, with a particular focus on underrepresented groups. What we’re looking for is how important people-related outcomes in an organization – things like who is hired, who is promoted, and who stays or leaves – might vary depending on your identity. At the same time, our consultants are doing the qualitative research, and then we synthesize those findings internally.”

“(Then) our learnings from (the data) shape the strategic recommendations that we offer to clients to, again, improve how they attract, select, develop and retain all employees, and particularly those from underrepresented groups. Clients often come to us with a sense that they have room to improve with respect to diversity and inclusion, but don’t know where to focus their attention or what to do once they determine that focus. Our analyses provide insight not only into where our clients should concentrate their efforts but also into what solutions to implement. For example, a client might believe they have a problem attracting a diverse set of applicants, but we find that their applicant pool is relatively diverse and underrepresented people are simply falling out of the funnel early in the hiring process. We might have that client concentrate less on active sourcing, then, and instead focus on ensuring their early stage selection processes are fair.”

TI: How do you wrangle all the diverse, silo-ed data and organize it for your internal analysis purposes?

“Storage is not usually a difficult issue in terms of size, but there are obviously security concerns that we take incredibly seriously, as we deal with sensitive data.”

“R is our main wrangling tool and we use some of the really great open source packages developed by the R community that allow us to create interactive dashboards and crisp visualizations as a means of communicating back insights, both internally to other members of our team and to our clients as well.”


TI: How does machine learning impact your current work? How are you using it?

“We use some ML techniques to impute missing demographic data. It’s often in the applicant data from recruiting software systems where we have the most missing data in terms of demographics. Once we impute the missing data, we are able to ask, for example, ‘Here are people from different demographic groups, how are they entering the application pipeline? Are they coming in through referrals? Through the company’s job website? Or through third party boards or events?’  A lot of the companies we work with actually track this information at a granular level, making it easy to gain insight about who is entering the funnel through what sources, and how successful those sources are.”
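
As a rough illustration of what demographic imputation can look like (this is not Paradigm’s actual pipeline), one common approach is to train a classifier on the records where the field is present and let it infer the missing values. The columns and tiny sample below are invented for the example.

```python
# Illustrative imputation sketch, not Paradigm's pipeline: fit a classifier on
# applicants whose demographic field is recorded, then impute (assign by
# inference) the missing values. Columns and data are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

applicants = pd.DataFrame({
    "source": ["referral", "job_board", "referral", "careers_site", "job_board"],
    "role":   ["eng",      "eng",       "design",   "eng",          "design"],
    "gender": ["woman",    "man",       "woman",    None,           None],
})

features = pd.get_dummies(applicants[["source", "role"]])
known = applicants["gender"].notna()

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features[known], applicants.loc[known, "gender"])

# Fill only the rows where the demographic field was missing.
applicants.loc[~known, "gender_imputed"] = model.predict(features[~known])
print(applicants)
```

Once the gaps are filled, the pipeline-source questions described above can be asked of the full applicant pool rather than only the records with complete data.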

TI: How do you see machine learning impacting your work five years from now?

“(Currently, Paradigm) is using existing tools primarily for imputation, and we aren’t pushing the envelope too far. At this stage, I think this is wise. Our work with clients has real outcomes on people, and you need to really know and take seriously the implications of what you are doing when you are using these new and exciting tools.”

“I think we are going to continue to see a lot of AI and machine learning move into the HR space. There are examples of companies who are already using this well – like Textio – but I think it’s important to be both optimistic and suspicious about new technologies in this space. Do we want machine learning to make hiring decisions? One might argue that is going to remove bias from the process because it removes the need for human judgement, but at the same time you have to wonder, what is the data underlying these models? It is very difficult to find data that links characteristics of people and their employment outcomes that is free from human bias, so any machine learning that is built on that data is likely to replicate those issues.”

“But there are reasons to be optimistic about the future of machine learning. For example, I am seeing a lot of work to actually diversify the machine learning industry. The advocacy there is really important because people are starting to understand that who makes these tools matters a lot.”

TI: What recommendations do you have for organizations who want to use data to understand and improve their current HR practices?

“A lot of (companies) already have recruiting software – applicant tracking systems – because even at small companies recruiting and hiring is such a heavy lift that most people find themselves looking into systems that will help them make that easier.”

“I think where companies can improve is really honoring the recording of data that isn’t auto-populated through every ATS (applicant tracking system). For example, in applicant data, taking the time to record every applicant that comes through a referral and who that referral is. This is really important to understand the success of hiring sources, especially for some of the larger companies we’ve worked with. Even if less than 3% of the applicant pool are referrals, we may end up finding that referrals comprise over 30% of hires. When companies really make sure that everyone getting referred is documented, we can feel confident and clear in our insights. Most companies are collecting this data in some way, but there’s a lot of variance in the quality of data that individuals need to record themselves.”
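
The referral arithmetic described above (a small share of applicants accounting for a much larger share of hires) boils down to a simple comparison once the source field is recorded consistently. A minimal sketch, with an invented DataFrame and column names:

```python
# Sketch of the source-effectiveness comparison described above;
# the DataFrame and its columns are invented for illustration.
import pandas as pd

applicants = pd.DataFrame({
    "source": ["referral", "job_board", "job_board", "careers_site", "referral"],
    "hired":  [True,       False,       True,        False,          True],
})

share_of_applicants = (applicants["source"] == "referral").mean()
share_of_hires = (applicants.loc[applicants["hired"], "source"] == "referral").mean()

print(f"Referrals: {share_of_applicants:.0%} of applicants, "
      f"{share_of_hires:.0%} of hires")
```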

“I’m really looking forward to the development of more HRIS/ATS systems in this space that will streamline data collection and link various systems (performance, recruiting, internal surveys, etc). Until then, I think the (best) thing to do is to really honor the data collection process with an eye towards making it legible for other people internally or externally to use in the future. This will happen naturally as people analyst positions become more of a norm, but until then, I think people think of the data collection process as just a burden with no end goal. I get that, but if done well, it really gives us (and our clients) the opportunity to gain meaningful and accurate insights.”


Cybersecurity: Is Machine Learning the Answer?

EDITOR’S NOTE: This article is about how to approach and think about cybersecurity. True Interaction built SYNAPTIK, our Data Management, Analytics, and Data Science Simulation Platform, specifically to make it easy to collect and manage core and alternative data for more meaningful data discovery. For more information or a demo, please visit us at https://synaptik.co/ or email us at hello@www.true.design.

The U.S. Department of Homeland Security has designated October as National Cybersecurity Awareness Month. Cybersecurity is currently at the forefront of many Americans’ minds following the sweeping data breach earlier this year at Equifax, a consumer credit reporting agency. The current total of Americans affected by the breach stands at 145.5 million; the breach is arguably the greatest of all time for its depth in addition to its scale. Exposed data includes names, birthdays, addresses and Social Security numbers. Even more alarming, consumers’ security questions and answers may have also been breached, giving hackers and their clients the ability to lock victims out of their private accounts by altering passwords and other account settings.

But with advances in machine learning (ML) and artificial intelligence (AI) technologies to snuff out potential malware threats, shouldn’t business users be able to construct more robust fortresses for their data?

The answer is yes, but this optimism should be tempered as the potential is more limited than many might think.

A few brief definitions are in order before exploring the key limitations of applying machine learning and AI algorithms to cybersecurity.

Machine Learning is the ability for computer programs to analyze big data, extract information automatically and learn from it.1

Artificial Intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings such as the ability to reason, discover meaning, generalize or learn from past experience.2

Cybersecurity is the practice of protecting systems, networks and programs from digital attacks. These attacks are usually aimed at accessing, changing, or destroying sensitive information, extorting money from users or interrupting normal business processes.3

So why should we temper our expectations when deploying machine learning algorithms for cybersecurity needs?

Machine Learning is not AI. Machine learning has amazing capabilities including the ability to analyze and learn from big data sets. However, the ability to learn should not be confused with the ability to reason or self-reflect, two characteristics of AI and humans. Therefore:

Machine Learning Will Identify Data Anomalies Better than It Identifies Malware. Machine learning algorithms can become very adept at identifying anomalies in large data sets, especially if they have a generous amount of training data. However, spotting threats becomes infinitely more complex if the algorithm must distinguish good anomalies from bad ones, as well as cope with unforeseen randomness. As a result:

Machine Learning Will Likely Produce Excessive False Positives. False positives could represent over 99% of the anomalies flagged as malware. Regardless, every surfaced anomaly may require human follow-up, quickly draining the limited cybersecurity resources of many organizations.4
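
A small sketch of the base-rate problem: an off-the-shelf anomaly detector (here scikit-learn’s IsolationForest, run on invented session data) flags statistical outliers, not confirmed malware, so most of what it surfaces still needs human triage.

```python
# Minimal anomaly-detection sketch on invented telemetry. The detector flags
# statistical outliers, not malware: every flagged row still needs triage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features: bytes transferred and login attempts per session.
normal = rng.normal(loc=[500, 2], scale=[100, 1], size=(10_000, 2))
odd = rng.normal(loc=[5_000, 20], scale=[500, 5], size=(10, 2))
sessions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = detector.predict(sessions)  # -1 = anomaly, 1 = normal

flagged = (flags == -1).sum()
print(f"{flagged} sessions flagged for human review out of {len(sessions)}")
# With only ~10 truly bad sessions among ~100 flags, most alerts are false positives.
```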

The question remains: how should cybersecurity leaders in organizations leverage machine learning to sniff out malevolent attacks? The answer combines the power of people with the power of machines. Heather Adkins, director of information security and privacy and a founding member of Google’s security team, recommends that companies “pay some junior engineers and have them do nothing but patch”.5 However, as machine learning cybersecurity algorithms are fed more data and supplemented with isolation capabilities that confine breaches for human study, a symbiosis between people and machines can prove more effective than siloed efforts to combat malicious threats online.

What might more concrete solutions look like? Attend our upcoming panel discussion to find out more.

Aligned with the goals of Cybersecurity Awareness Month, True Interaction is co-hosting a panel discussion this Thursday evening alongside the law firm PBWT (Patterson Belknap Webb & Tyler LLP), the digital agency Tixzy, and the strategic IT services provider Optimum Partners. Topics will include a high-level discussion of the current cybersecurity regulatory landscape, third-party risk and insider threats.

 


 

1. https://www.forbes.com/sites/forbestechcouncil/2017/08/07/what-is-machine-learning/#2de9f74679a7

2. https://www.britannica.com/technology/artificial-intelligence

3. https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html

4. https://www.forbes.com/sites/forbestechcouncil/2017/08/21/separating-fact-from-fiction-the-role-of-artificial-intelligence-in-cybersecurity/#13ad1fe81883

5. https://www.cnbc.com/2017/09/18/google-security-veteran-says-internet-not-secure-ai-wont-help.html


Sparking Digital Communities: Broadcast Television’s Answer to Netflix

In the late 1990s and early 2000s, network television dominated household entertainment. In 1998, nearly 30% of the population of the United States tuned into the NBC series finale of “Seinfeld”. Six years later, NBC’s series finale of the popular sitcom “Friends” drew 65.9 million people to their television screens, making it the most watched episode on US network TV in the early aughts. Today, nearly 40% of the viewers that tuned into the “Game of Thrones” premiere watched the popular show using same-day streaming services and DVR playback. The way people watch video content is changing rapidly, and established network television companies need to evolve to maintain their viewership.

While linear TV is still the dominant platform among non-millennials, streaming services are quickly catching up. As young industry players like Hulu, Netflix and YouTube transform from streaming services into content creators, and more consumers cut ties with cable, established network broadcasters need to engage their loyal audiences in new ways. The challenge to stay relevant is further exacerbated by market fragmentation as consumer expectations for quality content with fewer ad breaks steadily rise.


[Infographic courtesy of Visual Capitalist]

One advantage broadcast television still has over streaming services is the ability to tap into a network of viewers watching the same content at the same time. In 2016, over 24 million unique users sent more than 800 million TV related tweets. To stay relevant, network television companies are hoping to build on this activity by making the passive viewing experience an active one. We spoke with Michelle Imbrogno, Advertising Sales Director at This Old House about the best ways to engage the 21st century audience.

“Consumers now get their media wherever and whenever it’s convenient for them. At ‘This Old House’, we are able to offer the opportunity to watch our Emmy Award winning shows on PBS, on thisoldhouse.com or youtube.com anytime. For example, each season we feature 1-2 houses and their renovations. The editors of the magazine and website and the executive producer of the TV show work closely together to ensure that our fans can see the renovations on any platform. We also will pin the homes and the items in them on our Pinterest page. Social media, especially Facebook, resonates well with our readers.” – Michelle Imbrogno, Advertising Sales Director, This Old House

Social media platforms have become powerful engagement tools. According to Nielsen’s Social Content Ratings in 2015, 60% of consumers are “second screeners” – using their smartphones or tablets while watching TV. Many “second screeners” are using their devices to comment and interact with a digital community of fans. Games, quizzes and digital Q & A can keep viewers engaged with their favorite programming on a variety of platforms. The NFL is experimenting with new engagement strategies and teamed up with Twitter in 2016 to livestream games and activate the digital conversation.

“There is a massive amount of NFL-related conversation happening on Twitter during our games and tapping into that audience, in addition to our viewers on broadcast and cable, will ensure Thursday Night Football is seen on an unprecedented number of platforms.” – NFL Commissioner Roger Goodell

With social media optimization (SMO) software, television networks can better understand their audience and adjust their social media strategy quickly. Tracking website traffic and click rates simply isn’t enough these days. To stay on trend, companies need to start tracking new engagement indicators using Synaptik’s social media intelligence checklist:

Step 1: Integrate Social Listening Tools

The key to understanding your audience is listening to what they have to say. By tracking mentions, hashtags and shares you can get a better sense of trending topics and conversations in your target audience. Moreover, this knowledge can underpin your argument for higher price points in negotiations with media buyers and brands.

Step 2: Conduct a Sentiment Analysis

Deciphering a consumer’s emotional response to an advertisement, character or song can be tricky, but sentiment analysis digs deeper, using natural language processing to quickly understand consumer attitudes and opinions. Additionally, you can customize outreach to advertisers based on the emotional responses they are trying to tap into.
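
As a rough illustration of the technique (not any specific SMO product’s implementation), here is a minimal sentiment-analysis sketch using NLTK’s VADER lexicon; the sample posts are invented.

```python
# Minimal sentiment-analysis sketch using NLTK's VADER lexicon;
# the sample posts are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Loved tonight's episode, the finale was perfect #MustWatch",
    "Way too many ad breaks, I almost gave up halfway through",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    label = "positive" if scores["compound"] > 0 else "negative"
    print(f"{label:8} {scores['compound']:+.2f}  {post}")
```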

Step 3: Personality Segmentation

Understanding a consumer’s personality is key to messaging. If you want to get through to your audience you need to understand how to approach them. New social media tools like Crystal, a Gmail plug-in, can tell you the best way to communicate with a prospect or customer based on their unique personality. This tool can also help you customize your approach to media buyers and agents.

By creating more accessible content for users and building a digital community around content, television networks can expect to increase advertising revenue and grow their fan base. With Synaptik’s social listening tools, companies have the advantage to track conversations around specific phrases, words, or brands. Sign up for a 30 minute consultation and we can show you what customers are saying about your products and services across multiple social media channels online (Facebook, Twitter, LinkedIn, etc.).

Contributors:

Joe Sticca, Chief Operating Officer at True Interaction

Kiran Prakash, Content Marketing at True Interaction

by Nina Robbins


How the IoT Can Bring Down Healthcare Costs

Healthcare is a multi-billion dollar industry, and that’s not going to change anytime soon. The financial figures go both ways – revenues and costs – but for most of the people involved in healthcare, especially consumers, it boils down to the latter.

Healthcare costs are high for a reason. The processes, products and technologies used in the industry undergo strict quality control checks to ensure their effectiveness, and substantial resources are needed to create and deploy such components.

From a business standpoint, if stocks were used as a gauge of healthcare costs, even people with limited knowledge of financial markets could understand how massive this industry is and why the cost of medicine increases annually. A Business Insider article noted that healthcare stocks remained strong even after several other stocks fell following the Presidential Inauguration. And FXCM’s article on how to value a stock suggests that, while a stock’s valuation may differ from its intrinsic value, healthcare remains a compelling sector as baby boomers enter their senior years.

Fortunately, technology is also becoming a means to cut healthcare costs. Among the most promising innovations that could potentially make this possible is the Internet of Things (IoT).

The tech titan IBM enumerated the advantages of integrating the IoT into healthcare and the first on the list is reduced costs. A concrete example was given: real time patient monitoring. Non-critical patients can be monitored even at home, thereby decreasing hospital admissions and unnecessary costs.

Mubaloo revealed IoT-dependent technologies can be implemented in medical products such as RFID tags, beacons and even ‘smart beds’. Due to the large amount of equipment used by medical personnel, it’s a costly – not to mention time-consuming – task to track every piece, but with tiny modifications such as the installation of RFID chips, the process becomes much more efficient.

Beacons, on the other hand, can be placed near patient rooms or hospital wards, which can then be updated with the corresponding patient data or any relevant info to reduce costs on printed materials and other similar articles. ‘Smart beds’ can be used to notify doctors or nurses regarding the activity of their patients, which then lessens the need for frequent hospital rounds.

Moreover, Aranca discussed the prevalence of tech wearables in the US and Europe. Wearable devices are now specifically developed for functions such as tracking vital signs. This adds to the potential of remote patient monitoring as well as managing particular diseases. For instance, a wearable tracker may be used to measure a person’s glucose levels to help avoid or manage diabetes. Apple is reportedly developing this technology, and CNBC revealed that the first person to be tested is the firm’s CEO, Tim Cook.
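
A toy sketch of the remote-monitoring idea: a wearable streams readings and a simple rule flags out-of-range values for a clinician. The thresholds, patient IDs, and alert path below are illustrative assumptions, not any vendor’s implementation.

```python
# Illustrative sketch of threshold-based remote monitoring; readings,
# thresholds, and the alert path are all hypothetical examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    patient_id: str
    glucose_mg_dl: float

LOW, HIGH = 70, 180  # example thresholds, for illustration only

def triage(reading: Reading) -> Optional[str]:
    """Return an alert message when a reading falls outside the target range."""
    if reading.glucose_mg_dl < LOW:
        return f"ALERT {reading.patient_id}: low glucose ({reading.glucose_mg_dl} mg/dL)"
    if reading.glucose_mg_dl > HIGH:
        return f"ALERT {reading.patient_id}: high glucose ({reading.glucose_mg_dl} mg/dL)"
    return None

# A wearable would stream readings continuously; a few are simulated here.
for r in [Reading("p-001", 95), Reading("p-002", 210), Reading("p-003", 62)]:
    alert = triage(r)
    if alert:
        print(alert)  # in practice this would notify a clinician, not print
```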

More and more devices are getting connected each year, and researchers estimate that around 20 billion devices will be interconnected by 2020. With such a rapid pace of development, it’s only a matter of time before innovations such as the aforementioned wearables get officially rolled out across the industry.

As global healthcare turns more reliant on technology and connectivity, the Internet of Things will be utilized further in various parts of the industry. And with reduced costs now highly feasible, hopefully more people will be able to have access to the quality healthcare that they deserve.


The Top 9 Things You Need to Know About BI Software

Learning more about the data your business collects is important to evaluating the decisions you make today, next year, and in the next decade; that’s called business intelligence. But while software can do that for you, figuring out what software you should use can be a perplexing process.

The first step in evaluating that software is to figure out the platform you’ll be using and the workflow it supports. You’ll also need to establish your goals and objectives, and understand who needs to use that business intelligence. Any software like this will require training – both on its purchase and on its continued use. And your software should also provide solutions for security.

As you evaluate your software, you have to ask questions – and then more questions. How much support will you get, and what features are on the roadmap?

Want to work through the options? Use the handy graphic below for steps and concerns.

by Michael Davison


Achieving Continuous Business Transformation Through Machine Learning

In the past, I’ve blogged about applying Agile methodology to businesses at large: what we are seeing in business today is that the concept of “business transformation” isn’t something that is undergone once or even periodically – business transformation is becoming a continuous process.

Today, businesses at large – not just their creative and development silos – benefit from operating in an Agile manner, most importantly in the area of responding to change over following a plan. Consider the words of Christa Carone, chief marketing officer for Xerox:

“Where we are right now as an enterprise, we would actually say there is no start and stop because the market is changing, evolving so rapidly. We always have to be aligning our business model with those realities in the marketplace.”

This is an interesting development, but how, technically, can businesses achieve True Interaction in a continually transforming world?

Business Transformation: Reliance upon BI and Analytics

In order for an organization’s resources to quickly and accurately make decisions regarding a product, opportunity or sales channel, they must rely upon historic and extrapolated data provided by the organization’s Data Warehouse/Business Analytics group.

In a 2016 report by ZS Associates that surveyed 448 senior executives and officials across a myriad of industries, 70% of respondents replied that sales and marketing analytics is already “very important” or “extremely important” to their business’ competitive advantage. Furthermore, the report reveals that in just two years’ time, 79% of respondents expect this to be the case.

However, some very interesting numbers reveal cracks in the foundation: Only 12% of the same respondents could confirm that their organization’s BI efforts are able to stay abreast of the continually changing industry landscape. And only 2% believe business transformation in their company has had any “broad, positive impact.”

5 Reasons for lack of BI impact within an organization

1) Poor Data Integration across the business

Many Legacy BI systems include a suite (or a jumbled mess) of siloed applications and databases: There might be an app for Production Control, MRP, Shipping, Logistics, Order Control, for example, with corresponding databases for Finance, Marketing, Sales, Accounting, Management Reporting, and Human Resources – in all, creating a Byzantine knot of data hookups and plugins that are unique, perform a singular or limited set of functions, and are labor intensive to install, scale and upgrade.

2) Data collaboration isn’t happening enough between BI and Business Executives

Executives generally don’t appear to have a firm grasp of the pulse of their BI: only 41% of ZS Associates’ report participants thought that a collaborative relationship between professionals working directly with data analytics and those responsible for business performance exists at their company.

3) Popular Big Data Solutions are still siloed

Consider Ed Wrazen’s critique of Hadoop: During an interview at Computing’s recent Big Data and Analytics Summit, the Vice-President of Product Management at data quality firm Trillium Software revealed:

“My feeling is that Hadoop is in danger of making things worse for data quality. It may become a silo of silos, with siloed information loading into another silo which doesn’t match the data that’s used elsewhere. And there’s a lot more of it to contend with as well. You cannot pull that data out to clean it as it would take far too long and you’d need the same amount of disk storage again to put it on. It’s not cost-effective or scalable.”

4) Data Integration is still hard to do.

Only 44% of ZS Associates’ respondents ranked their organizations as “good” or “very good” at data aggregation and integration. 39% said that data integration and preparation were the biggest challenges within the organization, while 47% listed this as one of the areas in their organization where improvement would produce the most benefit.

5) Only part of the organization’s resources can access BI

Back in the day, BI used to be the sole province of data experts in IT or information analyst specialists. Now companies are seeing the benefits of democratizing the access and analysis of data across the organization. Today, a data analyst could be a product manager, a line-of-business executive, or a sales director. In her book Successful Business Intelligence, Cindi Howson, author and instructor for The Data Warehousing Institute (TDWI) famously remarked:

“To be successful with BI, you need to be thinking about deploying it to 100% of your employees as well as beyond organizational boundaries to customers and suppliers… The future of business intelligence centers on making BI relevant for everyone, not only for information workers and internal employees, but also beyond corporate boundaries, to extend the reach of BI to customers and suppliers.”

Business leaders should examine these symptoms in the context of their own organizations.

Is there a solution to these issues?

True Interaction CEO O. Liam Wright has a novel approach to a new kind of BI solution. One that involves machine learning.

“In my 20 years in the business I’ve discovered several worlds in business that never spoke to each other properly, due to siloed information spaces, communications, platforms and people. Today’s fluid world necessitates changing the BI game completely: If you want to have a high level of true interaction between your systems, platforms, customers, internal multiple hierarchical department structures, then you NEED to flip the game around. SQL-based Dashboards are old news; they are so 2001.

“You can’t start with a structured, SQL-based situation that inevitably will require a lot of change over time – organizations don’t have the IT staff to continually support this kind of situation – it’s too expensive.”

Instead, Liam took a different approach from the beginning:

“I thought, what if we capitalized on the sheer quantity and quality of data today, captured data in unstructured (or structured) formats, and put this data into a datastore that doesn’t care what type of data it is. Then, as opposed to expensive, rigid SQL-based joins on data types, we implement lightweight ‘builds’ on top of the data. These lightweight builds enable businesses to start creating software experiences off of their base dataset pretty quickly. They also enable organizations to get Business Intelligence right out of the box as soon as they perform a build – dynamic dashboards and data visualizations which can become much more sophisticated over time, as you pull in and cross-pollinate more data. Then, when the data is further under control, you can create software agents that can assist you in daily data processes, or agents that get those controls out the door.”
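
A toy illustration of the idea Liam describes (this is not SYNAPTIK’s implementation): heterogeneous records of any shape go into one schema-agnostic store, and a lightweight “build” derives a dashboard-ready view without a rigid SQL schema. Every name below is invented for the example.

```python
# Toy illustration only: a schema-agnostic datastore plus a lightweight "build"
# that derives a dashboard-ready view. Record shapes and fields are invented.
from collections import defaultdict

datastore = [  # records of any shape, from any source
    {"source": "crm",    "type": "deal",    "region": "NA", "value": 12000},
    {"source": "web",    "type": "visit",   "region": "EU"},
    {"source": "social", "type": "mention", "sentiment": "positive"},
    {"source": "crm",    "type": "deal",    "region": "EU", "value": 8000},
]

def build_revenue_by_region(records):
    """A lightweight 'build': pick out the fields this view needs, ignore the rest."""
    view = defaultdict(float)
    for rec in records:
        if rec.get("type") == "deal" and "value" in rec:
            view[rec.get("region", "unknown")] += rec["value"]
    return dict(view)

print(build_revenue_by_region(datastore))  # {'NA': 12000.0, 'EU': 8000.0}
```

Because the store never imposed a schema, adding a new data source later only means writing another lightweight build, not remodeling the database.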

What is this BI and Machine Learning marriage, exactly?

So what exactly is Liam describing? Modern business has just crossed the threshold into an exciting new space: organizations can routinely implement an integrated machine learning component that runs in the background, ingests data of all types from any number of people, places, and platforms, intelligently normalizes and restructures it so it is useful, runs a dynamic series of actions based upon the data type and whatever situation or context your business process is in, and creates dynamic BI data visualizations out of the box.

True Interaction’s machine learning solution is called SYNAPTIK.

SYNAPTIK involves 3 basic concepts:

DATA: SYNAPTIK can pull in data from anywhere. Its user-friendly agent development framework can automate most data aggregation and normalization processes. It can be unstructured data from commerce, web, broadcast, mobile, or social media. It can be audio, CRM data, apps, or images. It can also pull in structured data, for example from SAP, Salesforce, Google, communications channels, publications, Excel sheets, or macros.

AGENT: A software agent is a program that acts on behalf of a user or another program. Agents can be configured not only to distribute data intelligence in flexible ways but also to integrate directly with, and take action in, other internal and external applications for quicker transformation, enhancing your business processes and goals.

An Agent is composed of two parts: the operator and the controls. Think of an operator as the classic telephone operator of the 1940s who manually plugged in and unplugged your calls in the background. SYNAPTIK lets you see how the operator works:

Operators can be written in several languages, such as JavaScript, PHP/cURL, or Python. Organizations can write their own operators, or True Interaction’s development team can write them for you. An Agent also gives the user a control interface – a form field, or drag-and-drop functionality – in order to add specific assets or run any variety of functions. In addition, SYNAPTIK makes it easy to connect to a REST API, enabling developers to write their own software on top of it.
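
As a hypothetical example of the shape a small Python operator might take (the endpoint, token, and field names are placeholders, not SYNAPTIK’s actual API), an operator could pull data from a REST source, normalize it, and hand back rows for the platform to store:

```python
# Hypothetical shape of a small Python "operator": fetch from a REST source,
# normalize, and return rows for storage. The endpoint, token, and field names
# are placeholders, not SYNAPTIK's actual API.
import requests

def run(controls):
    """`controls` carries the values a user set in the agent's control form."""
    resp = requests.get(
        controls["endpoint_url"],                      # e.g. a CRM or ad-platform API
        headers={"Authorization": f"Bearer {controls['api_token']}"},
        params={"since": controls.get("since", "2017-01-01")},
        timeout=30,
    )
    resp.raise_for_status()

    # Normalize whatever shape came back into flat rows the datastore can accept.
    return [
        {"source": controls["endpoint_url"], "id": item.get("id"), "raw": item}
        for item in resp.json()
    ]
```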

BUILD: A build simply brings the DATA and AGENT components together, ultimately enabling you to better understand your organization’s various activities within your data space.

A new model of BI: What is the Return?

– Machine learning platforms like SYNAPTIK enable organizations to create wide and deep reporting, analytics and machine learning agents without being tied to expensive, specific proprietary frameworks and templates, such as Tableau. SYNAPTIK allows for blending of internal and external data in order to produce new, valuable insights. There’s no data modeling required to drop in 3rd party data sources, so it is even possible to create reporting and insight agents across data pools.

– With traditional methods, the process of data normalization requires hours and hours of time – indeed, the bulk of time is spent on this today. This leaves very little time for deep analysis and no time for deep insight. With SYNAPTIK, what takes 400 hours a month to manage can take minutes, freeing up nearly 400 hours of analysis, discovery, and innovation time to deliver the results you need.

– Not only is it possible to create your own custom reports and analytic agents, SYNAPTIK enables organizations to share their reporting agents for others to use and modify.

– The inherent flexibility of SYNAPTIK enables businesses to continually provide new data products/services to their customers.

– Not so far down the line: Establishment of The Synaptik Marketplace, where you can participate, monetize, and generate additional revenue by allowing others to subscribe to your Agents and/or Data.

All of these returns contribute not only to augmenting organizational leadership and innovation throughout the hierarchy, but also to producing incredibly valuable intelligence monetization, breakthrough revenue, and improved “client stickiness” with the rollout of new data products and services. And, best of all, it puts businesses into a flexible data environment that does, quite elegantly, enable continuous transformation as industries, markets, and data landscapes continue to change.

We’ve got the Experience you Need

True Interaction has logged hundreds of thousands of hours of research, design, development, and deployment of consumer products and enterprise solutions. Our services directly impact a variety of industries and departments through our deep experience in producing critical business platforms.

We Can Integrate Anything with… Anything
Per the endless variable demands of each of our global clients, TI has seen it all, and has had to do it all. From legacy systems to open source, we can determine the most optimal means to achieve operational perfection, devising and implementing the right tech stack to fit your business. We routinely pull together disparate data sources, fuse together disconnected silos, and do exactly what it takes for you to operate with tight tolerances, making your business engine hum. Have 100+ platforms? No problem. Give us a Call.

by Michael Davison


Can Blockchain help Media’s Data Challenges?

There has been a lot of discussion around blockchain and its framework in the financial services world. However, in our organization’s continuing research, as well as in our operations with clients, we are beginning to uncover specific opportunities that extend into the media space. We feel blockchain can and should serve as a universal and cost-effective foundation for addressing data management and insight decision management – regardless of organizational size, output, or industry.

There have always been three distinct challenges in our collective experiences here at True Interaction:

1. Data aggregation and normalization:

With more and more linear and digital channel fragmentation, the process of aggregating and normalizing data for review and dissemination has weighed down efficient insight decision making. One way to look at it is: you spend 40-60% of your time aggregating and normalizing data from structured and unstructured data types (PDFs, emails, word docs, csv files, API feeds, video, text, images, etc.) across a multitude of sources (media partners, internal systems, external systems & feeds, etc.). This leaves little time for effective review and analysis – essential to determining any insight.

2. Data review, reporting, dashboards:

Once you have your data normalized from your various sources you can then move to building and producing your reporting dashboards for review and scenario modeling. This process is usually limited to answering questions you already know.

3. Insight and action

Actionable insight is usually limited, largely because most time and resources are allocated to the two steps above. This process should entail a review of your data systems: are they providing you with insights beyond the questions you already know to ask? In addition, where and how can “rules” or actions in other processes and systems be automated? Today there are plenty of business situations where manual coordination and communication are still needed in order to take action – eroding any competitive edge that comes from executing quickly.

Blockchain can provide an efficient and universal data layer to serve your business intelligence tool set(s). This can help consolidate internal data repositories when aggregating and normalizing data. It can also be the de facto ledger for all data management activities, as well as the database of record, easing and minimizing the management of proprietary data platforms and processes.
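
A minimal, simplified sketch of the ledger idea: each data-management event (an ingest, a normalization, a report build) is hashed together with the previous entry, so the record of what was done and when becomes tamper-evident. This illustrates the concept only; it is not a production blockchain, and the event names are invented.

```python
# Minimal, simplified sketch of a hash-chained ledger for data-management
# events (ingests, normalizations, report builds). Conceptual only; this is
# not a production blockchain, and the event fields are invented.
import hashlib
import json
import time

ledger = []

def record_event(event: dict) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    ledger.append(entry)
    return entry

record_event({"action": "ingest", "source": "ratings_feed.csv", "rows": 10432})
record_event({"action": "normalize", "dataset": "ratings_feed", "schema": "v2"})

# Any later change to an earlier entry would break every hash that follows it.
print(ledger[-1]["hash"])
```
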
The following are additional resources:

How Blockchain Will Transform Media and Entertainment

http://blog.www.true.design/how-blockchain-will-transform-media-and-entertainment

Blockchain technology: 9 benefits & 7 challenges

https://goo.gl/uJfh4R

Blockchain just fixed the Internet’s worst attribute

http://blog.www.true.design/blockchain-just-fixed-the-internets-worst-attribute

By Joe Sticca


Blockchain Just Fixed the Internet’s Worst Attribute

Blockchain was originally conceived in 2008 and implemented in 2009 as a way to record all public transactions in the Bitcoin database, but the technology has applications across nearly every industry. Today, we are seeing a huge amount of investment in blockchain technology, and some exciting movement across several industries regarding its implementation, and with good reason. At its essence, the wonder of blockchain comes down to this: finally, electronic transactions of all varieties can be made possible without the need for a bank.

Today I want to discuss and explore blockchain and how it can revolutionize attribution, provenance, and copyright – the Achilles’ heel of digital media. We’ll also take a look at some companies that are spearheading this technology leap.

We are at an amazing point in history for artists. A revolution is going to happen, and next year it’s going to take over. It’s the ability of artists to have the control and the say of what they do with their music at large. The answer to this is in the blockchain. ~Imogen Heap, Grammy Award-winning singer-songwriter

Here’s how blockchain will fix the internet with regard to digital media: in the same way that blockchain technology provides an open, decentralized ledger of monetary transactions, it can also be purposed as a self-verifying database of other types of time-stamped events – one such event could be, for example, the registration of a copyright. A blockchain entry may also attach a hash of the work itself, the metadata associated with it, and even information about permitted uses of the piece of media. Following this idea, any new instance of the work – without that metadata – would not match the original record and would therefore be identified as a copy. “This can also help and empower artists and rights holders to manage and track the use of their work, as well as tie back into audience demographic data,” says True Interaction COO and resident digital media guru Joe Sticca.
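
A simplified sketch of the registration-and-verification idea described above: fingerprint the work together with its metadata, then check whether a later copy matches the registered record. The registry dict stands in for the public ledger, and the sample bytes and metadata are invented.

```python
# Simplified sketch of hash-based registration and verification. The registry
# dict stands in for the public ledger; a real service would anchor entries
# on-chain. Sample bytes and metadata are invented.
import hashlib

registry = {}

def register(work_bytes: bytes, metadata: dict) -> str:
    """Record a fingerprint of the work alongside its metadata."""
    work_hash = hashlib.sha256(work_bytes).hexdigest()
    registry[work_hash] = {"work_hash": work_hash, "metadata": metadata}
    return work_hash

def verify(work_bytes: bytes):
    """Return the registered record if these exact bytes were registered."""
    return registry.get(hashlib.sha256(work_bytes).hexdigest())

original = b"\x89PNG...image bytes..."  # placeholder for a real file's contents
register(original, {"creator": "Jane Artist", "license": "CC BY-NC 4.0"})

print(verify(original) is not None)                      # True: matches the record
print(verify(b"stripped or altered copy") is not None)   # False: no match
```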

Blockchain technology may well stabilize the value of digital media. Anyone who uses social media is intimately aware of the phenomenon of sharing something online, and watching it morph and be stripped of its metadata as it makes its way across pinboards, Tweets, and/or Reddit. Think of the ramifications of blockchain: Finally, an internet where, when you share something, it intrinsically contains the information regarding its creator as well as the contract for its use. And while this technology is purposed to squash piracy, its effects will fuel a whole new world of online commerce.

Here are a few companies that are leading the charge:

Blockai

Originally envisioned as a ‘Netscape for Bitcoin’ in 2015, Blockai intended to utilize blockchain for a kind of social media stream that would allow users to send messages and authenticate items. Now Blockai has announced it has raised $547,000 in seed funding to relaunch as a blockchain copyright service: a tool that allows artists to authenticate and claim copyright over images. With Blockai, the process is as follows: Create a piece of digital art, photo or anything that can be copyrighted. Register your copyright on the blockchain, a public ledger powered by Bitcoin. The record is permanent and immutable. Then, receive a registration certificate with cryptographic evidence that protects your copyright. You own the certificate forever.

Revelator

Revelator solves copyright challenges in the music industry by integrating sales, marketing, accounting and analytics into one unified system based upon blockchain technology, with fair pricing and levels of service for the individual artist, a manager, or a record label. Recently, the company announced it has raised $2.5 million in Series A funding led by Exigent Capital, with participation from Digital Currency Group and Israeli early-stage fund Reinvent (Revelator is headquartered in Tel Aviv). That’s on top of the $3 million the company had already raised.

Verisart

Verisart works in the same way, but in the realm of physical and fine art. The digital startup intends to chronicle original artworks, prints, multiples, and books by using blockchain, thereby assigning each object an unassailable certificate of authenticity. Each object’s provenance, in turn, is created step by step by its owners, who add their e-mail addresses to an object’s data when they purchase one. Ultimately, Verisart plans to catalogue older artwork by extending the verification service to online sites and artists’ estates, and eventually to begin working with art appraisers and insurers to add works that have already been validated. The company has received $2M in funding from Earlybird Venture Capital, Freelands Ventures, and Digital Currency Group.

Monegraph

Monegraph specializes in attribution of media with multiple creators or complex attribution. The Monegraph platform enables sharing of revenue across the value chain of media distribution for online broadcasts, video clips, image reels, and other licensed or brand sponsored content. According to Monegraph, the ability to distribute media via their revenue sharing infrastructure aligns the interests of all stakeholders in the creation and promotion of the content in such a way that distribution is dramatically amplified. This allows the media to reach localized communities of viewers not obtainable through centralized points of distribution. Monegraph has raised $1.3M in funding for its platform.

Ascribe

Ascribe is another copyright platform that enables creators to claim authorship and generate a certificate of authenticity for a piece of media. More than 20 marketplaces and services have integrated the platform, and close to 4,000 artists are using the service. The company raised $2 million in seed funding from Earlybird Venture Capital, Freelands Ventures, Digital Currency Group, and various angels. They’re a developer-friendly company that provides plenty of documentation around their REST API. Ascribe also includes features such as setting limited editions for pieces of digital media, loaning or renting a piece of digital media (granting a temporary license for a specific range of dates), and tracking the chain of ownership of a work of art.

By Michael Davison