The Future of Enterprise Analytics

Over the last couple of weeks since the 2016 Hadoop Summit in San Jose, eSage Group has been discussing the future of big data and enterprise analytics. Quick note: data is data, and data is produced by everything, so "big data" is really no longer an important term.

eSage Group is specifically focused on the tidal wave of sales and marketing data being collected across all channels, to name a few:

  • Websites – Cross multiple sites, Clicks, Pathing, Unstructured web logs, Blogs
  • SEO –  Search Engine, Keywords, Placement, URL Structure, Website Optimization
  • Digital Advertising – Format, Placement, Size, Network
  • Social
    • Facebook – Multiple pages, Format (Video, Picture, GIF), Likes (now with emojis), Comments, Shares, Events, Promoted, Platform (mobile, tablet, PC) and now Facebook Live
    • Instagram – Picture vs Video, Follows, Likes, Comments, Reposts (via 3rd Party apps), LiketoKnow.it, Hashtags, Platform
    • Twitter – Likes, RT, Quoted RT, Promoted, Hashtags, Platform
    • SnapChat – Follows, Unique views, Story completions, Screenshots. SnapChat, to say the least, is still the Wild West as to what brands can do to engage and ultimately drive behavior.

Then we have offline channels (print, TV, events, etc.), partners, and third-party data. And don't get me started on international data.

Tired yet?


While sales and marketing organizations see the value of analytics, they are hindered by what is accessible from the agencies they work with and by the difficulty of accessing internal siloed data stored across functions within the marketing organization: central corporate marketing, divisional/product groups, field marketing, product planning, market research, and operations.

Marketers are hindered by limited access to the data and by the simple issue of not knowing what data is being collected. Wherever the data lies, it is often controlled by a select few people who service the marketers and don't necessarily know the value of the data they have collected. Self-service and exploration are not yet possible.

Layer on top of this the fact that agile marketing campaigns require real-time data (or at least near-real-time) and accurate attribution and predictive analytics.

So, you can see there are a lot of challenges facing a marketing team, let alone the deployment of an enterprise analytics platform that can serve the whole organization.

Now that I have outlined the business challenges, let’s look at what technologies were mentioned at the 2016 Hadoop Summit that are being developed to solve some of these issues.

  • Cloud, cloud, cloud – lots of data can be sent up, then actively used or sent to cold storage, on or off premises. All the big players claim to have the "best" cloud platform
  • Security – divisional and function roles, organization position, workflow
  • Self-Service tools – ease of data exploration, visualization, costs
  • Machine Learning and other predictive tools
  • Spark
  • Better technical tools to work with Hadoop, other analytics tools and data stores
  • And much more!  

Next post, we will focus on the technical challenges and tools that the eSage Group team is excited about.

Cheers! Tina

Seeking 4 mid/senior level engineers to work on a Cloud-based Big Data project

eSage Group is always on the lookout for talented developers at all levels. We have worked hard to create a company culture of sharp, quick-learning, hardworking professionals who enjoy being part of a winning team with high expectations. As such, we hire self-motivated people with excellent technical abilities who also exhibit keen business acumen and a drive for customer satisfaction and solving our clients' business challenges. We have quarterly profit sharing based on companywide goals, allowing everyone on the team to participate in and enjoy the rewards of our careful but consistently strong growth. We are currently looking to fill 4 openings to complete a team that will be working together on a large-scale "big data" deployment on AWS.

  1. Cloud-operations specialist who can design a distributed platform for analyzing terabytes of data using MapReduce, Hive, and Spark.
  2. Cloud-database engineer who can construct an enterprise caliber database architecture and schema for a high-performance Cloud-based platform that stores terabytes of data from several heterogeneous data sources.
  3. Mid/senior-level software developer with extensive experience in Java, who can write and deploy a variety of data processing algorithms using Hadoop.
  4. A technical business analyst who can translate business requirements into user stories and envision them through Tableau charts/reports.

1) Cloud-operations specialist:

  • Bachelor's degree in Computer Science or related field, or 4 years of IT work experience
  • Familiarity with open-source programming environments and tools (e.g., Ant, Maven, Eclipse)
  • Comfortable using the Linux operating system and familiar with command-line tools (e.g., awk, sed, grep, scp, ssh)
  • Experience working with Web/Cloud-based systems (e.g., AWS, REST)
  • Knowledge of database concepts, specifically SQL syntax
  • Data warehouse architecture, modeling, profiling, and integration experience
  • Comfortable using the command line (e.g., Bash); experience with systems deployment and maintenance (e.g., cron job scheduling, iptables)
  • Practical work experience designing and deploying large-scale Cloud-based solutions on AWS using EC2, EBS, and S3
  • Working knowledge of one or more scripting languages (e.g., Perl, Python)
  • Experience using systems management infrastructure (e.g., LDAP, Kerberos, Active Directory) and deployment software (e.g., Puppet, Chef)
  • Programming ability in an OOP language (e.g., Java, C#, C++) is a plus

2) Cloud-database engineer:

  • Bachelor's degree in Computer Science or related field, or 4 years of IT work experience
  • Familiarity with open-source programming environments and tools (e.g., Ant, Maven, Eclipse)
  • Comfortable using the Linux operating system and familiar with command-line tools (e.g., awk, sed, grep, scp, ssh)
  • Experience working with Web/Cloud-based systems (e.g., AWS, REST)
  • Knowledge of database concepts, specifically SQL syntax
  • Firm grasp of databases and distributed systems; expert knowledge of SQL (i.e., indexes, stored procedures, views, joins, SSIS)
  • Extensive experience envisioning, designing, and deploying large-scale database systems both in traditional computational environments and in the Cloud
  • Ability to design complex data ETLs and database schemas
  • Desire to work with many heterogeneous terabyte-scale datasets to identify and extract Business Intelligence
  • Experience using multiple DBMSs (e.g., MySQL, PostgreSQL, Oracle, SQL Server)
  • Work experience using Hive and NoSQL databases is a plus

3) Mid/senior-level software developer:

  • Bachelor's degree in Computer Science or related field, or 4 years of IT work experience
  • Familiarity with open-source programming environments and tools (e.g., Ant, Maven, Eclipse)
  • Comfortable using the Linux operating system and familiar with command-line tools (e.g., awk, sed, grep, scp, ssh)
  • Experience working with Web/Cloud-based systems (e.g., AWS, REST)
  • Knowledge of database concepts, specifically SQL syntax
  • Excellent Java developer with knowledge of software design practices (e.g., OOP, design patterns) who writes sustainable programs and employs coding best practices
  • Ability to program, build, troubleshoot, and optimize new or existing Java programs
  • Several years of development experience using both version control (e.g., SVN, Git) and build management systems (e.g., Ant, Maven)
  • Able to create and debug programs both within IDEs and on the command line
  • Working knowledge of Web development frameworks and distributed systems (e.g., Spring, REST APIs)
  • Experience using the Hadoop ecosystem (e.g., MapReduce, Hive, Pig, Shark, Spark, Tez) to program, build, and deploy distributed data processing jobs
  • Programming ability in Scala is a plus

4) Technical business analyst:

  • Strong background in business intelligence
  • Minimum of 1 year using Tableau and Tableau Server
  • Able to work closely with cross-functional business groups to define reporting requirements and use cases
  • Extensive experience manipulating data (e.g., data cubes, pivot tables, SSIS)
  • Passion for creating insight out of data and data investigation
  • Experience using R, Mahout, or MATLAB is a plus

Please send resumes to tinam (at) esagegroup (dot) com.

What is the Internet of Things? Why should marketers care? RSVP Now to find out!

Please join us at this lively evening event sponsored and organized by eSage Group with co-sponsors Pointmarc and Cloudera. Hear a panel of industry experts from the areas of Wearables, Home Security/Automation, and Automotive.

Duane Bedard from eSage Group, a thought leader in the marketing analytics space, will moderate a discussion delving into the Internet of Things: where we are today, what the future holds, and how marketers can best prepare to take advantage of it to improve the effectiveness of marketing campaigns and develop new revenue streams.

The Learning Lounge is presented with PSAMA and DAA.

Details:

When: May 15, 2014
Time: 6:00pm – 9:00pm
Where: Club Sur, 2109 First Avenue South, SODO, just south of the Starbucks Center
Cost: $25, includes beverages (alcoholic and non) and appetizers. Parking is free and readily available after 6pm on First and side streets.

RSVP Now

 Check out the cool digs!


DON'T MISS IT!

Just announced! eSage Group’s Internet of Things Learning Lounge – May 15th

A First-Hand Conversation on the Internet of Things

The Internet of Things will profoundly re-shape marketing. As objects in all corners of our lives become connected to the web, what does it mean for brands and customers? To help provide insight, and a lively discussion, eSage Group and PSAMA have arranged a premium collection of senior-level thinkers from some of the area's strongest brands, all together on one panel.

See first-hand how Alaska Airlines is evolving air travel, how Vivint is revolutionizing digital homes, and how Nike continues to be an innovator in wearables, all through understanding the opportunities from the Internet of Things.

Learning Lounge: An All-New Event Format

At the Learning Lounge, Seattle Marketing Analytics Group, PSAMA members, and non-members alike are invited to join together for an evening at SUR, one of Seattle's most talked-about venues. Enjoy food and drink (the ticket price includes cocktails and light hors d'oeuvres) while learning from some of the Northwest's most innovative brands how the Internet of Things is starting to re-shape marketing.

Expert Panel:

Details:

Date: May 15th, 2014

Location: Club SUR, SODO, 2901 1st Avenue S

Time: 6pm

Tickets are $25 for all Seattle Marketing Analytics Group members!


Register here!

Special Thanks to our co-sponsors!


The Learning Lounge is replacing the normal monthly MeetUp at Fado’s for the month of May. We will return to normal in June. 

If folks like it, The Learning Lounge will hopefully become a quarterly (or so) event.

The Shish List – Big Data at MarketMix

This article was posted by Shish Shridhar, Global Director of Big Data, Retail. He was one of our Making Big Data Smart Data panelists at last week's PSAMA MarketMix.

http://blogs.msdn.com/b/shishirs/archive/2014/03/27/bigdata-at-marketmix.aspx

Thanks to Duane Bedard and Tina Munro of eSage Group, I participated in a panel discussion at MarketMix with the brilliant data scientists Jason Gowans of Nordstrom and Jon Francis of Nike. Duane Bedard did an excellent job of moderating the conversation. I love his skill of condensing everything we said into short sound bites that were ready to go on Twitter 🙂, and they did, with the hashtag #MarketMix.

Jon Francis shared interesting insights on the work he does on the ecommerce business at Nike. Jason Gowans runs the Nordstrom Data Lab and shared his perspectives on what it takes to walk the thin line between providing great customer experience through personalization and becoming “creepy”. We discussed where Big Data has proven value and where it has disappointed; talked about how Marketers can benefit from Big Data Analytics and what kind of data is useful for enhancing insights.

I spoke with several people in the audience after the session, and one of the questions that stood out was: "How can we start small, working with multiple data sources?" My recommendation is to have a look at a tool you most likely already have: Excel. Excel 2013 has some interesting new capabilities with PowerBI. The democratization of data and tools is empowering everyone to get started working with multiple datasets, combining them, and creating great visualizations to derive insights. Check out some of the examples from my previous blog posts:

  1. Analyzing Seattle 911 Data using PowerBI
  2. Using Power Map to Analyze Public Retailer Data
  3. Visualizing Wal-mart
  4. Let's visualize Beer because it's 5 pm somewhere
  5. How To: Visualize Real Time Flight Data & Correlate with Local Airport Weather

Repost: Breaking Down the Silos

I am reposting this because I have heard, during several discussions over the past few days, how difficult organizational silos are to break down – harder than integrating the data. Thoughts?

Harvard Business Review recently published the article "Advertising Analytics 2.0," which discusses the unprecedented amount of data available to help marketers manage their marketing mix in today's multi-channel environment, both on- and off-line. While the opportunity is vast, so is the challenge of sifting through various data sources to tune out the "noise" and find actionable insights. Those that can effectively capture these insights have shown improvements in marketing performance of 10% to 30%.

The article states that companies must implement new analytic strategies to harness the power of the deluge of data. It points to three necessary activities: Attribution, Optimization, and Allocation. Attribution is the process of quantifying the contribution of each marketing activity. Optimization is the use of predictive analytics to run various scenarios. Allocation is then the redistribution of marketing resources in real time.

Let's first look at Attribution. The key to this step is to collect data from a wide variety of sources across your organization, including sales, customer service, distribution, and finance; publicly available data like weather, traffic, unemployment rates, and consumer confidence; and, of course, the data from the various marketing tactics. By layering this information together in unstructured data stores, such as Hadoop, analysis can be done to look for correlations and causal effects. Getting your data out of the silos for analysis will allow you to derive more robust insights.
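To make the layering idea concrete, here is a minimal sketch of the join-then-correlate pattern. It uses pandas and made-up numbers purely for illustration (the column names, dates, and figures are all hypothetical, and a real deployment would do the equivalent joins at scale in Hadoop or Hive):

```python
# Sketch: layering heterogeneous data sources on a shared key (date)
# and taking a first look for candidate attribution signals.
import pandas as pd

# Stand-ins for three siloed sources: marketing spend, public weather
# data, and sales. All values here are invented for illustration.
spend = pd.DataFrame({"date": pd.date_range("2014-01-01", periods=5),
                      "ad_spend": [100, 150, 120, 200, 180]})
weather = pd.DataFrame({"date": pd.date_range("2014-01-01", periods=5),
                        "avg_temp_f": [40, 42, 38, 45, 50]})
sales = pd.DataFrame({"date": pd.date_range("2014-01-01", periods=5),
                      "units_sold": [55, 80, 60, 110, 95]})

# Layer the sources on the common key, as a Hive join would at scale.
combined = spend.merge(weather, on="date").merge(sales, on="date")

# A correlation matrix is a crude but useful first pass; a strong
# correlation flags a candidate for deeper attribution/causal analysis.
corr = combined[["ad_spend", "avg_temp_f", "units_sold"]].corr()
print(corr.loc["ad_spend", "units_sold"])
```

Correlation is only the starting point, of course: attribution proper requires separating the effect of spend from confounders (like the weather column above), which is where the predictive modeling in the Optimization step comes in.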

Look for our next post on Thursday, discussing Optimization and Allocation. In the meantime, check out our advice on getting started with your analytics data integration project.

For the full Harvard Business Review article text, click here.

Insights into Big Data and Marketing

Recently, I came across an article by Michael Brenner, VP of Marketing and Content Strategy for SAP and a Forbes contributing author. He highlights that to derive value from Big Data, you should make sure to start with a set of well-thought-out questions. He also refers to some tips that eSage Group's own Duane Bedard wrote in a prior Information Management article on the same subject.

Check out Michael’s article here: http://bit.ly/13Iapji