Mastering Your MarTech Stack – January 9th’s LA Marketing Analytics Group


Most CMOs today spend more on technology than many CTOs. With a new “shiny object” appearing daily, it’s hard to know where to focus. What AdTech/MarTech tools should you be using? What 1st-, 2nd-, and 3rd-party data will improve your marketing? How do marketers make sense of it all?

At January 9th’s LA Marketing Analytics Group, Mark Osborne, Senior Director of Client Success at Conversion Logic, will share his strategic framework for evaluating current marketing challenges through the lens of potential technology solutions. Included are interactive calculators for ranking current organizational readiness, comparing alternate solutions, and tracking your path to digital transformation, covering many popular buzzwords and acronyms like CRM, DMP, attribution, automation, and more. After his presentation, you’ll be able to create a strategic vision for your organization and prioritize the marketing technology projects that will make the biggest impact, with the least effort, as quickly as possible.

Mark has spent his entire career in marketing technology, digital strategy, attribution, and much more. He is currently writing a book titled “Mastering Your MarTech Stack: A guide to getting the most from customer data and marketing technology,” which is due to be released in 2018.

Details:

Where:  General Assembly – Santa Monica (map)
When:  January 9th at 6:30-9:00pm
Cost: Free (Appetizers and beverages provided by eSage Group)

RSVP Now

Starbucks’ Nirvana – Nov 28 – Seattle Mktg Analytics Meetup

Rani Madhura, Sr. Advanced Analytics Manager at Starbucks, will share with us details of Starbucks’ mission-critical Digital Order Management Initiative. The initiative’s goal is to consolidate mobile and café orders to improve customer wait times via creatively managed order and item sequencing decisions.

She will also share details on Starbucks’ Capacity Planning Initiative, including:

1.  Insights derived from their integration of the cross-functional datasets from Asset Management, Labor Productivity and Store Level Transactional Data Systems

2.  Key Metrics identified to effectively manage inventory levels and dynamic labor allocation

Finally, Rani will discuss ROI, best practices, and future considerations for both initiatives.

Don’t miss this one!!

RSVP Now

*****************

eSage Group (http://bit.ly/2f0JdMj) is the organizer and sponsor for the LA (http://bit.ly/2nhl5GP) and Seattle Marketing Analytics Groups.

Blockchain- Transforming Marketing Attribution & Beyond – Nov 9th LA Marketing Analytics Meetup


Blockchain is a hot topic these days! What is it? What does it mean for marketing and analytics?

Come to this month’s LA Marketing Analytics Group Meetup to hear Miguel Morales & Sam Kim, Co-Founders of Kr8os, give a crash course in Blockchain and then discuss its use in Open Attribution Modeling and Programmatic Affiliate Marketing.

Where:  General Assembly – Santa Monica (map)
When: November 9th at 6:30-9:00pm
Cost: Free (Appetizers and beverages provided by your organizer, eSage Group)


eSage Group  is the organizer and sponsor for the LA and Seattle Marketing Analytics Groups.

 

Guest Post: The Digital Transformation of Retail

By ShiSh Shridhar, WW Director of Business Intelligence – Retail Sector, Microsoft

You go shopping; let’s say it’s a national hardware store because you have a painting project you’ve decided to tackle this weekend. You have done your research online, chosen the paint and now you are at the store to pick up your supplies and get started. But, when you reach the section with the paintbrushes, you realize you’re not exactly sure what you need. You stand there for a moment trying to figure it out, and then you start looking around, hoping a sales associate will appear. And one does! She’s smiling, and she’s an expert in paints and yes, she can direct you to the brush you need — and reminds you to pick up some blue tape.


Shopping miracle?

No, shopping future. This kind of positive customer experience is one of the many ways that artificial intelligence (AI), sophisticated data gathering, and the cloud are being used to empower employees, bring consumers into stores, and shorten the path to purchase. These advancements help brick and mortar retailers compete with online retailers in today’s world. It’s the digital transformation of retail, and it’s happening now in ways big and small.

AI + Data = Retail Revolution

This transformation is driven by data, which today can come from any number of sources. In this example (a product called Floor Sense from Microsoft partner company Mindtree), the data is collected from security cameras already in place throughout the store. The cameras capture footage of how people move through space, where they stop and what they do as they shop. The video feed is then analyzed using AI that has been trained to understand how a customer acts when he or she needs help. When that behavior is recognized, a sales associate with the right expertise is sent to talk to the customer and help the customer make a decision.

But a store’s proprietary data is only the tip of the iceberg. Today, there are millions of data points that are either publicly available or easy to purchase from companies like Experian and Acxiom. Retailers can combine that demographic data with their existing CRM data to model behavior and build micro-segmentations of their customer base. Insights from that narrow analysis allow retailers to personalize, predict and incentivize in ways that are far more accurate than ever before.

Putting data insights into the hands of employees

Already, that kind of analysis has helped make online shopping more productive with relevant, timely offers. The next step for retailers is to learn how to make data-driven insights useful to store employees, as in the hardware store example, so they can enhance the customer’s in-store experience. The data could come from a customer’s interactions with the retailer’s app, chat bots, social media, in-store beacons or Wi-Fi, all of which, when compiled, allows a retailer to make extremely accurate inferences about a given customer’s behavior.

Managed well, those insights help a store employee serve a customer better. Managed poorly, personalized targeting in-store has the potential to spook customers. To handle it well, retailers must do two things: First, any in-store tracking should be done through a consumer opt-in, with transparency about how the retailer will use the information. Second, the customer deserves a good value exchange; it must be clear to her how she is benefitting from sharing her information with the retailer, and how her information contributes to delivering her a frictionless shopping experience.

Using a customer’s digital exhaust to everyone’s benefit

As consumers explore purchasing options and develop their preferences using search tools, social media, apps, and in-store visits with a device in hand, they leave behind a digital exhaust. Today, advances in AI, data aggregation, and the cloud allow retailers to collect that digital exhaust to generate a style profile of prospective customers, which can then be used to introduce those customers to other products they might like. In this so-called phygital world — where the physical and digital overlap — retailers can combine data from multiple places to make inferences that will help them sharpen their marketing approach. The techniques are at hand — now it’s up to creative retailers to find innovative ways to use those insights to inspire their customers and shorten the path to purchase.

This article was originally posted on Independent Retailer.

The Great Data Migration – Part One

I don’t care who doubts our ability to succeed, as long as it’s nobody on my team.
– Kobe Bryant, Los Angeles Lakers Guard

Prepare For Takeoff

Everyone, these days, is jettisoning on-premises storage and sending their data to the cloud. The reasons are varied, but generally come down to two factors: time and cost. Cloud storage from any of the major providers, like Amazon or Microsoft, can cost less than $0.02 per GB per month. Compare that to Apple’s revolutionary magnetic hard drive that debuted in 1981: it had 5MB of storage and cost $3,500, which works out to over $700,000 per GB. Ok, there was no monthly fee, but I digress. 😉 Time is usually how long it takes to get a new server, file share, or document repository installed in your corporate headquarters vs. simply storing new data in Amazon S3 or Microsoft Azure. Or perhaps it’s the amount of IT resources needed to keep aging, outclassed data centers up and running.

There are many advantages to cloud storage, which won’t be rehashed here. If you need a refresher (or convincing), this site may come in handy: http://www.enterprisestorageforum.com/storage-services/cloud-storage-vs.-on-premise-11-reasons-to-choose-the-public-cloud-1.html

For the moment, let’s assume that you have decided to move your data to the cloud. This article will help you decide where to move it, how best to do so, and an ideal way to keep it updated and fresh.

 Where To Go?

In today’s cloud landscape, there are two dominant players: Amazon and Microsoft. There are others, such as Google, but Amazon Web Services (AWS) and Microsoft Azure hold the keys. In addition to storage, they both offer services such as Virtual Machines, Caching, Load Balancing, REST interfaces, Web hosting, and more, which can handle your other applications should you need to migrate them to the cloud in the future. There are pros and cons to each, but both will handle your data securely, provide timely and cost-effective access, and transparently maintain ready-to-use backups in case of unforeseen events. Let’s break them both down:

AWS S3 (Simple Storage Service) is, as the name states, pretty simple. It has a free tier with 5GB of data and then breaks down into three categories: Standard, Infrequent Access (IA), and Glacier. If you just need to stash old data in the cloud and have no idea how it will be used in the future, use IA or Glacier for extremely cheap storage. Glacier is only $0.004 per GB per month vs. Standard at $0.023 per GB per month (US West Region). The trade-off with Glacier and IA is that it takes a little longer to get at the data you want to use, anywhere from a few minutes to several hours. Data can be moved up and down between the Standard, IA, and Glacier tiers, so, for instance, those old application logs that no one was using can quickly be made available for reporting when needed.

Standard storage is what most people use for quick access to data. For the first 50TB per month, the price is $0.023 per GB per month (US West Region). Anything can be stored here, such as images, binary files, text data, etc. AWS S3 uses “buckets” to contain data, and each bucket can hold an unlimited number of objects. Each object within a bucket is limited to 5TB in size. For a breakdown of AWS S3 pricing, go here: https://aws.amazon.com/s3/pricing/.

We’ll discuss how to migrate data to S3 a bit later. For now, know that access to your S3 data is through the AWS web console and a variety of APIs, such as the AWS S3 API and the s3:// or s3n:// file protocols. AWS S3 is secure by default, with only the bucket/object creators having initial access to the data. Permissions are granted via IAM roles, access/secret keys, and other methods that are out of scope for today. A good primer for S3, including security, can be found in the S3 FAQ: http://aws.amazon.com/s3/faqs/.
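To make the bucket/object model concrete, here is a minimal Python sketch using the boto3 SDK. The bucket name, file names, and region are hypothetical placeholders, and AWS credentials are assumed to be configured separately (environment variables, ~/.aws/credentials, or an IAM role).

```python
import boto3

# Assumes AWS credentials are already configured outside this script.
s3 = boto3.client("s3", region_name="us-west-2")

# Create a bucket (the name is hypothetical and must be globally unique).
s3.create_bucket(
    Bucket="example-marketing-archive",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Upload a local file into Standard storage.
s3.upload_file(
    "weblogs_2017_06.csv",
    "example-marketing-archive",
    "weblogs/2017/06/weblogs_2017_06.csv",
)

# Upload rarely used data straight into Infrequent Access to save money.
s3.upload_file(
    "old_app_logs.csv",
    "example-marketing-archive",
    "archive/old_app_logs.csv",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
```

The same `StorageClass` argument is how you would later tier objects between Standard, IA, and Glacier as their access patterns change.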

Azure Storage has a lot more options than AWS S3. This can be confusing at first, but it also offers the most in terms of flexibility and redundancy for your data. There is a small free tier, and as a counterpart to AWS Glacier, Azure Storage offers “Azure Cool Blob Storage” for your archival, compliance, or other “don’t use but can’t throw away” data. Prices are usually less than $0.01 per GB per month in some regions.

Unlike S3, Azure Storage comes in several flavors of redundancy, so one can choose how many backups of the data exist and how easily they are accessed. If you have easily replaceable data, say from a 3rd-party API or data source, then choose the cheaper LRS (Locally Redundant Storage) option, which keeps copies of your data within a single Azure data center. Need a durable, always-available, “a crater can hit my data center yet I’m still OK” option? Then RA-GRS (Read-Access Geo-Redundant Storage) is the preferred option. This ensures that copies of your data are also maintained at a second data center hundreds of miles away, yet always available for easy access. Middle-ground options exist as well. For a breakdown of Azure Storage pricing, please visit: http://azure.microsoft.com/en-us/pricing/details/storage/blobs/.

Note: AWS S3 is functionally equivalent to Azure Storage GRS (Geographically Redundant Storage), so use this option when comparing prices.

Azure Storage uses “containers” instead of buckets; blob containers hold your blobs, while the same storage account can also host tables and queues (discussed below). Individual blobs can be terabytes in size, and each storage account can “only” hold 500TB, spread across as many containers as you like. Access to data is through the Azure Portal, Azure Storage Explorer, PowerShell, the Azure CLI (command-line interface), and other APIs and file protocols.
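As a rough counterpart to the S3 sketch above, here is a minimal example using the azure-storage-blob Python SDK (the SDK packaging has changed over the years, so treat this as illustrative rather than canonical). The connection string, container, and blob names are hypothetical.

```python
from azure.storage.blob import BlobServiceClient

# The connection string comes from the storage account's "Access keys" blade;
# account name, key, and container name below are placeholders.
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=examplestorage;"
    "AccountKey=<your-key>;EndpointSuffix=core.windows.net"
)

service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("raw-marketing-data")
container.create_container()  # one-time setup; raises if it already exists

# Upload a local file as a block blob.
with open("weblogs_2017_06.csv", "rb") as data:
    container.upload_blob(name="weblogs/2017/06.csv", data=data, overwrite=True)
```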

Aside from regular blobs, there are a couple of other data types that an Azure storage account can hold: tables and queues. Think of both as convenient layers laid over the raw storage, to ease read and write access for different applications and scenarios.

 blog - azure storage tablesAn Azure Storage (AST) is essentially a NoSQL key-value store. If you’re not sure what that is, then you likely don’t need it. 🙂 NoSQL data stores support massive scalability yet the dataset and server sharding that is normally necessary (and a headache) for this is handled for you. AST, like other NoSQL datasets, supports a flexible schema model which allows one to keep customer data, application logs, web logs, and more – all with different schemas – in the same table. Learn more here: http://azure.microsoft.com/en-us/services/storage/tables/.
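A small illustration of that flexible schema, using the azure-data-tables Python package (a newer SDK than existed when Table storage launched, so again a sketch). The table name and entity fields are made up; note that the two entities share only the required PartitionKey and RowKey.

```python
from azure.data.tables import TableServiceClient

conn_str = "<storage-account-connection-string>"  # placeholder

service = TableServiceClient.from_connection_string(conn_str)
table = service.create_table_if_not_exists("AppLogs")

# A web-log entity...
table.create_entity({
    "PartitionKey": "weblog",
    "RowKey": "2017-06-01T00:00:01Z",
    "Url": "/landing/summer-sale",
    "StatusCode": 200,
})

# ...and a customer entity with completely different columns, same table.
table.create_entity({
    "PartitionKey": "customer",
    "RowKey": "cust-0042",
    "Email": "jane@example.com",
})

# Key-value style reads are fast when filtered by partition.
weblog_rows = table.query_entities("PartitionKey eq 'weblog'")
```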

Azure Storage Queue (ASQ) provides cloud-based messaging between application components. Having a central messaging queue is critical for applications, and parts of applications, that are decoupled and need to scale independently of one another. This would likely only be needed if you have applications that currently store message data on-premises and need to be migrated to the cloud. The size of each message is limited to 64KB, but there can be an almost unlimited number of messages (up to the storage account’s capacity limit). Learn more here: https://azure.microsoft.com/en-us/services/storage/queues/.
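A minimal producer/consumer sketch with the azure-storage-queue Python SDK; the connection string and queue name are hypothetical, and the JSON payload is just an example message body.

```python
from azure.storage.queue import QueueClient

conn_str = "<storage-account-connection-string>"  # placeholder

queue = QueueClient.from_connection_string(conn_str, queue_name="order-events")
queue.create_queue()  # one-time setup

# Producer side: enqueue a small (< 64KB) message.
queue.send_message('{"order_id": 12345, "status": "placed"}')

# Consumer side: read and delete messages independently of the producer.
for msg in queue.receive_messages():
    print(msg.content)        # stand-in for real message handling
    queue.delete_message(msg)
```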

Another option unique to Azure is the ability to link your corporate Windows Active Directory or Office 365 Active Directory with Azure Active Directory (Azure AD). This feature, called Azure AD Connect, allows SSO (single sign-on) between on-prem and cloud-based applications and services. It is easy, for example, to quickly set up permissions and roles that manage access to essential services and storage across your organization.

For a rundown on security and encryption options, please visit: http://docs.microsoft.com/en-us/azure/storage/storage-security-guide.

This is great for raw storage, but now what about my DATABASE?

Almost every organization has relational data. While this can be extracted and placed into raw storage, it’s often easier to just lift it entirely into the cloud and go from there. Both Amazon and Azure platforms support numerous relational database hosting options, from Azure’s SQL Database to AWS’s Relational Database Service and many options in between. We’ll look at some of them here:

AWS Relational Database Service (RDS) makes it easy to deploy and scale relational databases in the cloud. It frees you from the hassle of managing servers, patching, clustering, and other IT-heavy tasks. It supports six different flavors of database: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server. One unique option is the ability to offload read traffic to one or more “Read Replicas” and thus increase the availability and performance of your primary database instance. Database security differs depending upon your database flavor (some support encryption at rest, etc.), but RDS itself can be secured by being deployed within an organization’s AWS VPC (Virtual Private Cloud). In my opinion, AWS as a whole has a simpler approach to security than Azure, because more AWS services can be set up behind the VPC, which acts as a gateway to sensitive data and applications. Learn more about AWS RDS here: http://aws.amazon.com/rds/.
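For a sense of how little infrastructure work is involved, here is a hedged boto3 sketch that provisions an RDS PostgreSQL instance and a read replica. The identifiers, instance class, and credentials are placeholders; a real deployment would also specify VPC, subnet group, and security-group settings.

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")

# Provision a managed PostgreSQL instance (all names/sizes are illustrative).
rds.create_db_instance(
    DBInstanceIdentifier="marketing-db",
    Engine="postgres",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,               # GB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",
)

# Offload reporting traffic to a read replica of the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="marketing-db-replica-1",
    SourceDBInstanceIdentifier="marketing-db",
)
```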

AWS Redshift is Amazon’s data warehouse in the sky. Essentially a supersized PostgreSQL, it provides scalable, cost-effective SQL-based storage, with queries that can run both against Redshift itself and against data in S3 (via Redshift Spectrum). It stores data in a columnar fashion, giving fast query times over massive amounts of data. It might be overkill if your dataset is small, but if you have petabytes (or exabytes) of structured data that need analyzing quickly, Redshift can likely handle it. Start with AWS Redshift’s home page here: http://aws.amazon.com/redshift.
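Because Redshift speaks the PostgreSQL wire protocol, an ordinary PostgreSQL client library works against it. Below is a small sketch using psycopg2; the cluster endpoint, database, credentials, and table are all hypothetical.

```python
import psycopg2

# Connect to a (hypothetical) Redshift cluster endpoint on port 5439.
conn = psycopg2.connect(
    host="example-cluster.abc123xyz.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="change-me-please",
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT channel, SUM(spend) AS total_spend
        FROM campaign_spend
        GROUP BY channel
        ORDER BY total_spend DESC;
    """)
    for channel, total_spend in cur.fetchall():
        print(channel, total_spend)
```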

AWS Athena is a newer service that attempts to blend the raw and relational data worlds together. Simply point it at an S3 bucket, define a schema, write your SQL query, and go. You only pay for the queries run on the raw data, and the schema definition can be reused with other queries, modified for another run, or simply tossed away when finished. Athena can also store the results back into AWS S3 or feed them to another workflow. By not having a permanent relational layer, data workflows and ETLs have fewer steps and fewer points of failure. Learn more about AWS Athena here: http://aws.amazon.com/athena/.
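A minimal boto3 sketch of that workflow: run a query over raw files in S3 and poll for the result. The database, table, and S3 output location are hypothetical and assume the schema has already been defined over the bucket.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-west-2")

resp = athena.start_query_execution(
    QueryString="SELECT referrer, COUNT(*) AS hits FROM weblogs GROUP BY referrer",
    QueryExecutionContext={"Database": "marketing_raw"},
    ResultConfiguration={
        "OutputLocation": "s3://example-marketing-archive/athena-results/"
    },
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```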

Azure SQL Database is as simple as it gets: a SQL Server database as a service. No Virtual Machines or licenses to manage, no patching, lots of redundancy, and fast performance. One can pay per database or in “elastic pool database units” (EPDUs), which spread resources across many databases. Azure SQL Database is meant for small to medium-sized databases (up to 50GB) that can operate independently from one another, as in a multi-tenant application or reporting solution. If you have a lot of SQL data to migrate, it is a good idea to break it up and store it along date or business lines, or to make the jump to Azure SQL Data Warehouse, a larger service meant for enterprise workloads (see below). Connections are made through a standard ODBC/JDBC connection string, with the URI as the service endpoint for your database.

Keep in mind that this is not the same as full SQL Server. Since there is no real “server” involved, most system stored procedures (DBCC, etc.) and distributed transactions won’t work, and SQL Server peripheral services, such as SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS), are not included. These voids can, however, be filled by other services in the Azure stack or by running a full copy of SQL Server in an Azure Virtual Machine (see below). Although a look at Azure analytics is out of scope here, you should know that Azure supports an entire range of analytical services that can consume data from Azure SQL databases. Learn more about Azure SQL Database here: http://azure.microsoft.com/en-us/services/sql-database.
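To show what the standard ODBC connection looks like from Python, here is a small pyodbc sketch; the server, database, credentials, and table are hypothetical placeholders.

```python
import pyodbc

# Standard ODBC connection string for a (hypothetical) Azure SQL Database.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:example-sqlserver.database.windows.net,1433;"
    "Database=marketingdb;"
    "Uid=dbadmin;Pwd=change-me-please;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

cursor = conn.cursor()
cursor.execute(
    "SELECT TOP 10 CampaignId, CampaignName "
    "FROM dbo.Campaigns ORDER BY CreatedDate DESC"
)
for campaign_id, campaign_name in cursor.fetchall():
    print(campaign_id, campaign_name)
```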

Azure SQL Data Warehouse (ASDW) is for enterprise-grade data warehouse deployments. Like AWS Redshift, it targets large analytical workloads; compute and storage can be scaled independently, so you can prioritize analytical horsepower or long-term storage. It also lets you pause the compute side entirely, turning ASDW into an archival data store when analytics aren’t needed. ASDW leverages Microsoft’s PolyBase service, which allows queries to execute against other data sources, such as Azure Data Lake, Azure HDInsight, or even an on-prem SQL Server data warehouse. Unlike Azure SQL Database, Azure SQL Data Warehouse stores data in a columnar format for maximum performance and scalability. Learn more about Azure SQL Data Warehouse here: http://azure.microsoft.com/en-us/services/sql-data-warehouse.

 Rolling Your Own – AWS and Azure Virtual Machines

Of course, if you need advanced SQL Server features, such as SQL Server Analysis Services (SSAS), or want to run a completely different database type, you can always spin up a Virtual Machine (VM) and install the relational database software there. Many VM images from both AWS and Azure come with SQL Server, Oracle, or other software preinstalled, so all you need is your licensing information. Some images also include the cost of the database software, effectively renting the license to users for a monthly fee. This can be useful, for example, if you would like to try out the features of SQL Server Enterprise before making a full purchase. Virtual Machines are also useful when ETLs and data workflows need to be migrated to the cloud as well, since the VM can simply host the software required to run them.

NOTE: When you go the VM route, you are usually responsible for hardware provisioning/formatting, software patches, service upgrades, and maintaining secure access (through firewall rules, etc) to your system. A good pro/con for evaluating Azure SQL Database vs SQL Server on Azure VMs can be found here: http://docs.microsoft.com/en-us/azure/sql-database/sql-database-paas-vs-sql-server-iaas.

Now, how do I get it up there?

Alright, you’ve chosen your cloud platform and you know what data to move… or do you? How do you prioritize what goes and what stays? Stay tuned for The Great Data Migration – Part Two, where I’ll cover the next steps in lifting your data into the cloud.

Hope this helps and happy migrating! Feel free to email me at jasonc@esagegroup.com with any additional questions!

Sincerely,  J’son 

 

The Future of Enterprise Analytics

In the couple of weeks since the 2016 Hadoop Summit in San Jose, eSage Group has been discussing the future of big data and enterprise analytics. A quick note – data is data, and data is produced by everything, so “big data” is really no longer an important term.

eSage Group is specifically focused on the tidal wave of sales and marketing data being collected across all channels – to name a few:

  • Websites – Cross multiple sites, Clicks, Pathing, Unstructured web logs, Blogs
  • SEO –  Search Engine, Keywords, Placement, URL Structure, Website Optimization
  • Digital Advertising – Format, Placement, Size, Network
  • Social
    • Facebook – Multiple pages, Format (Video, Picture, GIF), Likes (now with emojis), Comments, Shares, Events, Promoted, Platform (mobile, tablet, PC) and now Facebook Live
    • Instagram – Picture vs Video, Follows, Likes, Comments, Reposts (via 3rd Party apps), LiketoKnow.it, Hashtags, Platform
    • Twitter – Likes, RT, Quoted RT, Promoted, Hashtags, Platform
    • SnapChat – Follows, Unique views, Story completions, Screenshots. SnapChat, to say the least, is still the Wild West in terms of what brands can do to engage and ultimately drive behavior.

Then we have Off-Line (Print, TV, Events, etc.), Partners, and 3rd Party Data. Don’t get me started on International Data.

Tired yet?


While sales and marketing organizations see the value of analytics, they are hindered by what is accessible from the agencies they work with and by the difficulty of accessing internal siloed data stored across functions within the marketing organization – this includes central corporate marketing, divisional/product groups, field marketing, product planning, market research and operations.

Marketers are hindered by limited access to the data and by the simple issue of not knowing what data is being collected. Wherever the data lies, it is often controlled by a few select people who service the marketers and don’t necessarily know the value of the data they have collected. Self-service and exploration are not yet possible.

Layer on top of this the fact that agile marketing campaigns require real-time data (or at least close to real time) and accurate attribution/predictive analytics.

So you can see there are a lot of challenges facing a marketing team, even before considering the deployment of an enterprise analytics platform that can serve the whole organization.

Now that I have outlined the business challenges, let’s look at what technologies were mentioned at the 2016 Hadoop Summit that are being developed to solve some of these issues.

  • Cloud, cloud, cloud – lots of data can be sent up, then actively used or sent to cold storage, on- or off-prem. All the big guys claim to have the “best” cloud platform
  • Security – divisional and function roles, organization position, workflow
  • Self-Service tools – ease of data exploration, visualization, costs
  • Machine Learning and other predictive tools
  • Spark
  • Better technical tools to work with Hadoop, other analytics tools and data stores
  • And much more!  

Next post, we will focus on the technical challenges and tools that the eSage Group team is excited about.

Cheers! Tina

 

 

 

Get Marketing Insights Fast Without a Data Hostage Crisis

The landscape for marketing analytics solutions is more cluttered than ever, with multiple options and approaches for marketing departments to consider. One option that we are seeing more and more of is a seductive offering that promises a simple, fast, nearly turnkey approach to getting analysis and insight from your growing stacks of data. The offer is this: a vendor will import your data to their systems, do analysis on it with their in-house experts, and come back to you with insights that will help you run your business better.

No doubt, this is an attractive offer if you are like many marketing organizations, struggling to get internal resources to help consolidate data and do the analysis required to get you the insights you need. Business Intelligence resources are hard to find in your company, and the data holders in IT are backlogged and short-staffed. You need insights now to help engage and sell to your customers, and you are done waiting on internal resources, so why not go this route? While this is likely a quick, tactical solution that will get you answers in the near term, there are several major drawbacks to it as a longer-term strategy.

Market leading organizations know that their data is a significant asset that, when used well, can help them better understand and engage their customers, anticipate customer needs, cross sell, upsell, and stay ahead of the competition. As part of making data a core competency, your organization has to do the hard work to intimately know its data, its strengths, its shortcomings, and understand what it can tell you about your business.  That intimate understanding of data only comes from digging in, “doing the homework,” investing in the infrastructure and skillsets to excel at business intelligence inside the organization.  Organizations that have this kind of understanding of their data are continually improving the quality of data in their organization and building the kind of sustainable internal BI capability that actually adds significantly to the value and sustainability of the company.  C-suite, take note!

If you outsource that knowledge, you may get the answers you seek fast, but you do not get the sustainable, growing in-house capability that becomes a core differentiator for your company and helps you lead the market. I’m amazed when I hear of this, but it is a very common practice. What if your vendor goes out of business, gets acquired, changes business models, or you decide to change vendors? Your vendor is holding your data hostage. What are you left with then? All the money you spent bought you yesterday’s insights, but you have no investment or capability toward the future. Your team has none of the knowledge or infrastructure to sustain and continue to grow the flow of business intelligence that is critical to serving your customers and staying ahead of your competition. You are back to zero.

Fair enough, you say, but damn it, I still need insights now and I can’t wait any longer. Tactical and non-sustainable is better than nothing, right? Well, consider that it doesn’t have to be an “all or nothing” approach. There is a way to get both fast and sustainable. You can start with a partner who gets you to the critical insights you need now, but does it on your systems, builds out infrastructure you own (be it in the cloud on your behalf or on-premises), and helps mentor your team members along the way. You may spend a little more along the way to do this, but with this approach you are investing, not just paying a monthly fee with no incremental addition of value to your company. Very quickly you will be way ahead.

If the vendor you pick, in this case, goes out of business, moves on, or you decide to part ways, there may be some short term pain, but you own the assets, data, and business logic they built and you have team members who have been working directly with the technology and data, “doing the homework”, and can keep you moving forward.  Nobody has your data held hostage.

The right choice for a vendor should:

  • Have deep experience utilizing the cloud to get you up and running fast, with limited need for hardware purchase and support. The cloud is great, but make sure it is your cloud, not someone else’s.
  • Work with you to understand your unique needs, data, internal team skills, and challenges, and create a roadmap to Business Intelligence ROI internally.
  • Provide all the senior BI talent you need now to get answers fast, but also help you grow that skill in-house, with training, new-employee interviewing, and ongoing mentoring. They need to have a demonstrated understanding that knowledge transition to your team is part of the deliverable and be committed to providing it.

Pick a partner who can help you avoid having your data taken hostage, while getting you the insights and ROI you need fast!

Written by Duane Bedard, eSage Group President and Co-Founder