Learning Analytics – Overcoming initial barriers

Higher education could change significantly over the next 10–15 years as data becomes more and more available. More and more educational institutions are striving to leverage learning analytics to make better decisions, provide personalized learning that improves student outcomes, and optimize resources. What is learning analytics? It is the collection, analysis, and reporting of data about students and their contexts, for the purpose of understanding and optimizing learning and the environments in which it occurs.

A recent survey by a well-known agency found that many university participants do see the benefits of analytics but have serious concerns about privacy protection and data quality. Concerns center on privacy, data ownership, and an unwillingness to share data. Sometimes the ethical issues seem enormous, ranging from who really owns the data, to what the institution's stewardship responsibilities toward it are, to moral considerations about what kinds of research questions are appropriate to study.

Where does learning analytics data come from? It is student data, derived from students' interactions with institutional systems such as the Learning Management System. Since it is student data, it is logical that the data, presented in a meaningful way, should be accessible to each student so they can understand it and help themselves. In other words, learning analytics results should be made available to students so that they can benefit from them. If we don't do that, why are we collecting the data in the first place?

I have no doubt that educational institutions need a framework for data standardization, consistency, and federation, along with balanced security across the organization, in order to realize maximum benefit from analytics. "Garbage in, garbage out" is known to everyone. In addition, the data needs to be de-siloed in order to reap the significant benefits of higher education analytics. As far as privacy protection is concerned, data can be masked to protect individuals' identities, much as is done in the healthcare and insurance industries. The payoff from learning analytics could be huge.
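To make the masking idea concrete, here is a minimal sketch of one common technique: replacing direct identifiers with keyed pseudonyms before records ever reach analysts. The key, field names, and records are all invented for illustration; a real deployment would need a proper de-identification review.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would be kept out of the
# analytics environment entirely.
SECRET_KEY = b"keep-this-out-of-the-analytics-store"

def pseudonymize(student_id: str) -> str:
    """Derive a stable, non-reversible pseudonym from a student ID."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    """Strip direct identifiers; keep only the fields analytics needs."""
    return {
        "student": pseudonymize(record["student_id"]),
        "course": record["course"],
        "logins_per_week": record["logins_per_week"],
    }

# Invented sample records with direct identifiers.
records = [
    {"student_id": "S1001", "name": "Jane Doe", "course": "BIO101", "logins_per_week": 5},
    {"student_id": "S1002", "name": "John Roe", "course": "BIO101", "logins_per_week": 2},
]
masked = [mask_record(r) for r in records]
```

Because the pseudonym is stable, analysts can still follow one student's activity across systems without ever seeing a name or real ID.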

One key to the success of learning analytics efforts is identifying the right individuals, such as the various data owners on your campus, to include in the process, and working with them to cultivate a balanced approach. Identify quick wins with a small analytics project first, rather than going for a big-bang approach. Show the results, value, and ROI to others, get their buy-in, and then execute the subsequent phases of the project.

Data can be collected so that a student's entire education profile, from kindergarten through higher education, is available and easily accessible to the institution across its different data systems. Analytics on this data will help educators better understand students and their learning profiles. The day is not far off when all this data will be available to most educational institutions, and probably to employers as well. I believe the resulting analytics must be shared with students too, so that they become aware of their own strengths and weaknesses. This will enable full personalization of the learning experience.

Educators will have a responsibility to ensure that the power and potential of any new information produced by learning analytics is used first to benefit the students who are most vulnerable to disengagement and disenfranchisement.


Justifying a Big Data Project – Good Math – Bad Presentation

IT investment decisions are easy. Right? If your projections show that you'll get back more than you spend, in either cost savings or increased revenue, you do it. Sounds easy. It usually isn't. But it needs to be.

Let’s say you’re exploring a big data or a master data project. You know you should do it. You know that it makes business sense. But, the finance guys want hard numbers, and that’s often not easy to get.

CFOs often demand some fairly sophisticated numbers: ROI, NPV, IRR, payback period. So business folks go to great lengths to come up with those calculations.

But I've found that, in the end, to get the sale, your analysis and presentation need to be brain-dead simple, a no-brainer. If you aren't comfortable with the numbers, if you don't fully understand them and can't explain them easily and convincingly, don't even bother going to the top to ask for budget approval. The answer should slap you in the face. "Yes, of course, we have to do it" must be the obvious conclusion. And remember, it all has to be measurable.

The most important thing I learned in business school was how to do analysis on the back of a napkin. Literally, you should be able to outline the ROI for a business project on a napkin. I've done it before. I once helped convince the management team of a startup to sell the company and lock in a good return, rather than continue investing for another three years in hopes of a higher return, by scribbling a few numbers on my coffee-stained napkin (I drink a lot of coffee) in a staff meeting.

A. Bird-in-hand return now: $10/share offered by a potential acquirer.


B. Potential return in three years: roughly $15/share.

  • Revenue would be 70% higher (20% per year growth target)
  • Stock price of 4× revenue (typical for a company growing 20%)
  • Stock dilution of 25%, because we'd need to raise $10 million

= $10 × (1 + (0.7 × 0.75)) = $15.25 per share (potentially)
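That napkin arithmetic is easy to sanity-check in a few lines. The 70% growth and 25% dilution figures are the ones from the meeting; everything else is derived, and it lands at roughly the $15 a share the napkin promised:

```python
# Sanity check of the napkin math: potential per-share value in three years.
current_price = 10.00      # $/share offered by the acquirer today
revenue_growth = 0.70      # revenue ~70% higher after three years at ~20%/yr
dilution = 0.25            # raising $10M costs existing holders 25% ownership

# Growth accrues to existing shareholders only on the 75% they'd still own.
potential_price = current_price * (1 + revenue_growth * (1 - dilution))
print(f"${potential_price:.2f} per share")  # roughly $15
```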

The simple result was a modest potential upside. I didn't bother to risk-adjust anything or do a fancy NPV calculation. My colleagues already knew how high the market risks were. We were in a very competitive market and needed to make significant product enhancements just to remain competitive.

The decision was a no-brainer. We took the deal.

Yes, we put the whole thing in a fancy spreadsheet later, but that was really all a formality. The real decision had been made in that conference room on that napkin.

You should apply a similar approach when you’re trying to get buy-in for a Big Data Analytics or Master Data Management or other strategic data project.

Let's look at two manufacturing companies. Both make, or have made, acquisitions fairly regularly. Both IT departments knew they needed to handle their master data better. They had all the usual problems: data silos, incomplete data, quality problems, imperfect customer service, and so on. Both had lots of inefficiencies because various groups didn't know what other groups were doing. Both companies had the idea to integrate big data across their various divisions so they could run more analytics to optimize their businesses.

While their challenges were similar, each company took a different approach to justifying the project. One tried to justify the project via increased sales. The other through reduced costs.

1. Industrial materials manufacturer – ROI would come from increased sales – better cross-selling and thus higher productivity for the telesales staff.

2. Air conditioner manufacturer – ROI would come from cost savings: reducing the cost of maintaining master data across multiple systems and divisions. For example, it becomes much easier to enter new customers or modify customer information enterprise-wide.

One was way easier to calculate and measure than the other. Guess which one got funded much faster.

Company 1 stated that increasing telesales productivity by 15% would more than pay for the project. It got funded right away. They also projected a variety of cost savings, but the obvious advantage of increased sales was the most convincing number; the rest was gravy. The project has been implemented, and the results are exceeding expectations.

Company 2 collected a lot of data and wrote a 10-page report and a 15-slide presentation, basing their justification on the reduced data-maintenance costs of IT and LOB personnel. They calculated that they spent tens of thousands of IT man-hours per year on master-data-related activities, and significantly more among LOB personnel in the business units. By making those processes and those employees more efficient, they estimated $5 million in annual savings, far more than the cost of the project. They calculated an NPV of the savings of $10 million and an IRR of 170%. But it took 10 pages and 30 minutes to explain.
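The NPV and IRR figures themselves are straightforward to compute once you have the cash flows. A minimal sketch, with invented cash-flow numbers for illustration (not Company 2's actual figures):

```python
# Minimal NPV and IRR calculations for a stream of annual cash flows.
# Year 0 is the project cost (negative); later years are net savings.
# These cash-flow numbers are illustrative, not from the article.
def npv(rate, cashflows):
    """Discount each year's cash flow back to the present and sum."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-6):
    """Find the discount rate where NPV crosses zero, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cashflows = [-2_000_000, 5_000_000, 5_000_000, 5_000_000]  # cost, then savings
project_npv = npv(0.10, cashflows)   # at a 10% discount rate
project_irr = irr(cashflows)
```

The point of the article stands, though: the hard part was never the calculation, it was presenting the result simply.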

Working with outside consultants deeply knowledgeable and experienced in master data and data quality projects, they came up with twelve ways to save money across a variety of groups and processes, covering many hundreds of employees. For each of the twelve processes and personnel types, they estimated a different productivity-improvement coefficient, ranging from 5% to 25%. They calculated that they'd save millions by reducing both master data maintenance and data errors. They built a big spreadsheet to calculate the savings, then transposed it into a few PowerPoint slides, each with 40 or 50 numbers on it.

Great analysis.

Bad presentation.

They are still working toward approval. They need to simplify their approach, and they need to make sure the results are clearly measurable. It's hard to track man-year savings across many divisions and job functions. Perhaps they should concentrate on one major group, apply the average of the productivity coefficients, and come up with a few simple measures that both justify the project and can be measured. All the detail is great, but present it in a highly simplified way.

I have a background in statistics and math. I’m somewhat of a geek. I like numbers. But, first and foremost, I’m a businessman. I have a steadfast belief that when you are making business decisions, throwing more math, and especially throwing higher-level math, at decision-making can easily result in diminishing returns. If you can’t very easily and quickly explain the numbers to your bosses with full confidence, then don’t even bother. Simplify it all first.

Build an ROI-based business case for your big-data project.

Big Data Analytics are increasingly essential to gaining competitive advantage.  But, where do you start?

Intelligently analyzing more data results in better business decisions. Right? I should just dig in and do it. Right? Well, not necessarily. As the volume of structured, unstructured, and semi-structured data grows ever faster, you should start by answering a few business-oriented questions:

  • Where, when and how do I make big data a strategic advantage?
  • Which of my business processes will benefit the most from big data analytics?

Once you have answered those strategic, business-driven data strategy questions, you can ask the technical and project questions:

  • How will I deal with large and rapidly growing data volumes and poor performance?
  • How do I integrate and analyze new data sources, such as unstructured data?
  • What tools do I need to achieve this?
  • How do I get there?
  • How big an effort will it be?

So, you are asking, “How can Big Data technologies, tools and processes transform my organization with game-changing capabilities?”

Your approach to big data analytics should start with business strategy.  Target business processes where a data-centric approach can drive significant improvements.   What data, analytics and KPIs will provide a significant business ROI?  Before you can accurately determine ROI, your first technical step should be to evaluate your data quality and completeness.  You need to know how much work you have to do in terms of data cleaning, ERP systems enhancements and how much new data you are going to have to collect.  For example, you might have to alter your business systems to make sure you are collecting good data on an ongoing basis.  Once you know the amount of work needed, you can build an accurate ROI-based business case.
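A first-pass data-quality assessment can be as simple as profiling completeness per field, so you know where the cleaning and collection work lies. A minimal sketch, where the field names and customer records are invented for illustration:

```python
# Profile per-field completeness for a batch of records, as a first-pass
# data-quality check. Field names and sample data are hypothetical.
def completeness(records, fields):
    """Return the fraction of records with a usable value for each field."""
    report = {}
    for field in fields:
        filled = sum(
            1 for r in records
            if r.get(field) not in (None, "", "N/A")
        )
        report[field] = filled / len(records)
    return report

customers = [
    {"id": 1, "name": "Acme Corp", "region": "EMEA", "industry": ""},
    {"id": 2, "name": "Globex",    "region": None,   "industry": "Mfg"},
    {"id": 3, "name": "Initech",   "region": "APAC", "industry": "N/A"},
    {"id": 4, "name": "Umbrella",  "region": "AMER", "industry": "Mfg"},
]
report = completeness(customers, ["name", "region", "industry"])
```

A report like this (name 100% complete, region 75%, industry 50%) tells you exactly how much cleaning and new collection stands between you and an accurate ROI estimate.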

Once the business case is made, you'll dive into choosing specific technologies.  There are lots of choices to make, including analytics, business intelligence, data visualization, in-memory technologies, columnar and MPP databases, Hadoop-based systems, data warehouse appliances, big data integration and cloud storage platforms.  Make your choices with sustainability and evolution at the center of your thinking, so that you can continue to benefit from and expand your investments, building on them rather than building one-offs.

Evaluating, installing, configuring and implementing cutting-edge in-memory database appliances or real-time data warehousing solutions is exciting.  They promise the advantage of high-capacity, parallel computing performance for your big data endeavor.  But remember, these technology decisions are not made in a vacuum.  They are made with business process and ROI at the forefront.  And make sure your solution is designed to be flexible and scalable, with room for future add-on capacity, to avoid unnecessary up-front costs from over-provisioning.  Keep your eye on the ROI ball.
