
What Learning Can Learn From Machine Learning

Nastel Technologies®
January 20, 2022

Over the years, this biweekly letter has provided me with the opportunity to fully and fairly document just how much free time college students can have if they try. My college roommates tried really hard. They found time to make prank calls to the campus literary magazine, create enough frost in our fridge to throw snowballs out the window on 90-degree days, leave old pizza in the entryway for the stated purpose of growing penicillin for a roommate who couldn’t afford antibiotics, and organize campus recruiting events for fake investment banks. When these time-wasting activities required a fake identity, the persona of choice was John W. Moussach Jr., an alumnus turned successful Midwestern industrialist. (We don’t hear enough about successful industrialists these days – another downside of digital transformation.)

 

Last week I looked online for remnants of John W. Moussach Jr. and came upon neither the Wikipedia page my roommates built after graduating nor the Moussach aphorism that somehow made it onto Wikiquote (“We have all heard the Will Rogers quote ‘I never met a man I did not like.’ In my youth, I met a World War I veteran who had met Will Rogers. The veteran told me, ‘I never met a man I did not like until I met Will Rogers’”), but rather an article on something called Study Sive, which purports to feature higher education news.

 

The article mentioned John W. Moussach Jr. in the second line, but then devolved into Moussach babble:

By famous acclaim, Moussach’s excellent quote is subsequent. Creating John W. Moussach Jr. I took a ton of work for no obvious purpose. The equal is alas proper of the kingdom of online getting to know. Sure, tens of many online degree programs within the U.S. Have made better training more available than ever earlier. But in stark assessment to the effect of online transport on each other service, to date, online studying has didn’t make American higher schooling greater affordable.

 

It turns out that the Study Sive article, ostensibly written by Cindy G. Fryer, an unusually attractive “social media evangelist and certified beer guru,” and dated January 9, 2022, was actually a 2019 Gap Letter mangled by some bad algorithm that adopted a synonym for every other word – an attempt to avoid detection that can fully and fairly be called Moussachian.

 

“Cindy Fryer,” who “writes” all the “articles” on Study Sive, is an example of bad artificial intelligence (AI), or perhaps artificial stupidity. But in my daily digital encounters, “Cindy” is the exception, not the rule. We’re all experiencing more and more good AI. 52% of companies have accelerated AI adoption due to Covid-19, and 86% say AI is becoming a mainstream technology at their company. In higher education, AI is powering online discussion boards and chatbots that improve student outcomes. Last summer Times Higher Education postulated that AI “will soon be able to research and write essays as well as humans can.” Last month Google/Alphabet announced that its DeepMind subsidiary had built an AI model that can read and respond to questions at a high school level.

 

While what or who powers “Cindy” may never be uncovered, what powers good AI is machine learning. Machine learning initially comprised human-constructed algorithms that parsed data and predicted outcomes. If the outcome was incorrect, a human had to adjust the algorithm to improve it. But that all seems as ancient as Will Rogers. Machines now learn via software that mimics the networks of neurons in our brains. Data goes in, outcomes come out, same as before. But in between are layer upon layer of digital “neurons.” Today’s machine learning involves constructing giant mathematical models by pairing massive amounts of input data with correct outcomes and then training the software to form a neural network, adjusting its connections over and over along the steepest gradient of the error between predicted and correct outcomes. The resulting model or network is ultimately able to accurately recognize/classify/predict correct outcomes or make correct decisions without any human programming, or even human understanding of how it works.
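For the technically curious, here is what that loop looks like in miniature. The sketch below (in Python, with a toy task and invented layer sizes, learning rate, and data chosen purely for illustration) trains a tiny two-layer network to reproduce the XOR truth table by repeatedly nudging its weights down the error gradient:

# A toy illustration of the "data in, outcomes out" loop described above.
# Everything here (network size, learning rate, iteration count) is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Input data paired with correct outcomes (the XOR truth table).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny network of digital "neurons": 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: data goes in, a predicted outcome comes out.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # How far the predictions are from the correct outcomes.
    error = pred - y

    # Backward pass: trace the error gradient back through the layers.
    d_pred = error * pred * (1 - pred)
    d_hidden = (d_pred @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight downhill along the steepest gradient of the error.
    W2 -= lr * hidden.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

print(np.round(pred, 2))  # should approach [[0], [1], [1], [0]] without any hand-written rules

No one writes rules for XOR here; after enough passes over the data, the network’s own adjusted weights produce the correct outcomes.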

 

If that sounds a bit tricky, it’s because it is. GPT-3, the natural language processing (NLP) engine released by OpenAI in 2020, was trained on some 45 terabytes of text and produced a model with 175 billion parameters. Within months, successor models had passed a trillion parameters, roughly 10x growth. As machine learning advances are correlated with the volume of available data, Covid’s acceleration of digital transformation is further accelerating a field that was already progressing at an unfathomable pace.
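To make “parameters” concrete: every connection weight and bias in a network is one adjustable number, and the counts multiply quickly. A small sketch of the arithmetic for a hypothetical fully connected network (the layer sizes below are invented purely for illustration):

# Counting the adjustable numbers in a fully connected network.
def dense_parameter_count(layer_sizes):
    """Weights between consecutive layers plus one bias per neuron after the input layer."""
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return weights + biases

# A toy network: 784 inputs -> 512 -> 128 -> 10 outputs.
print(dense_parameter_count([784, 512, 128, 10]))  # 468,874 parameters

# The models described above have hundreds of billions of such numbers.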

 

This is why, in the pantheon of skills gaps, the data/machine learning/AI skills gap is the most consequential, perhaps even existential. And why, just before Christmas, the Senate voted unanimously to pass the AI Training Act, a bill focused on educating government civilian leaders on AI. And why, just after Christmas, President Biden signed into law the National Defense Authorization Act for 2022, which has a number of provisions related to AI training, including establishing a new community college for the Navy.

 

An AI-for-dummies explanation won’t close this gap. But by understanding the basics of machine learning, we can draw a few lessons (or gradients) for K-12 and postsecondary education, which haven’t changed nearly enough from the days of our parents and grandparents, and which may bear some responsibility for your inability to understand machine learning. (So don’t think of this as AI-for-dummies, but rather AI-for-smart-people-failed-by-the-education-establishment.)

 

While machines have made remarkable progress when it comes to learning, humans need help. Here are a few lessons for schools from the bright new star in the learning firmament:

 

1. Importance of Clear Learning Outcomes

Machines only learn when the desired outcome is clear, i.e., when a clear output or objective function can be defined: what exactly are we trying to get the subject to learn? In contrast, the vast majority of degree programs and courses for humans don’t start with clear learning outcomes, and neither do individual classes. They start with what faculty want to teach (typically what they’ve always taught, and often took themselves as students). When’s the last time you heard of an instructor starting a class with a clear expression of a learning outcome? To the extent four-year colleges and universities have learning outcomes, they’re an accreditation-process-driven afterthought, expressed in terms so broad they’d be as fruitless for machine learning as they are for human learning (see, e.g., English 2700 at Cal State L.A.: “analyze a text’s relationships to its cultural contexts” and “read intratextually and intertextually, making comparative connections within the texts themselves and with other literary works”).
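For contrast, here is what a learning outcome looks like when it is clear enough for a machine: an explicit objective function that scores every answer against the correct one, leaving no room for interpretation. A minimal Python sketch, with labels and model outputs invented purely for illustration:

# What a "clear learning outcome" looks like to a machine: an explicit objective
# function that scores every prediction against the correct answer, with no ambiguity.
import numpy as np

def cross_entropy(predicted_probs, correct_labels):
    """Objective function: lower is better; zero means every answer was certain and correct."""
    eps = 1e-12  # avoid log(0)
    p = np.clip(predicted_probs, eps, 1 - eps)
    return -np.mean(correct_labels * np.log(p) + (1 - correct_labels) * np.log(1 - p))

# The outcome is defined before any teaching happens: these are the right answers.
correct_labels = np.array([1, 0, 1, 1])

# Hypothetical predictions from two models: each number is a confidence that the answer is 1.
confident_and_right = np.array([0.95, 0.05, 0.90, 0.97])
hedging_and_wrong = np.array([0.55, 0.60, 0.40, 0.50])

print(cross_entropy(confident_and_right, correct_labels))  # small loss: outcome achieved
print(cross_entropy(hedging_and_wrong, correct_labels))    # larger loss: outcome missed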

 

While K-12 does better in this regard (public schools are required to meet state standards), few standards would pass machine-learning muster in terms of clarity. And although new higher ed platforms like eLumen aim to reorient curriculum development around outcomes (in eLumen’s case, starting with the institutions facing the greatest urgency, i.e., community colleges), we’re unlikely to see major changes for a few more years. But as students continue to vote with their feet, non-selective institutions will have no choice but to unbundle and simplify current programs into linked series of skills-based learning experiences.

 

2. Primacy of Assessment

Only once we can assess that a given algorithm, model, or network is producing correct classifications or predictions can a machine begin to learn. So the twin quasar of a clear learning outcome is assessment. And while every K-12 and postsecondary course incorporates assessments, and while programs and courses with clear(er) learning outcomes are less likely to shy away from rigorous summative assessments, it’s relatively rare to find assessments closely tied to learning outcomes.
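In machine-learning terms, assessment means scoring the model on held-out examples it never trained on, using the same metric that defines the outcome. A minimal sketch, with synthetic data and a deliberately simple one-parameter “model” (everything below is invented for illustration):

# Assessment tightly tied to the learning outcome: hold back data the model never saw
# and score it with the same metric that defines the outcome.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one feature, and a label that is mostly 1 when the feature exceeds 0.5.
feature = rng.random(1000)
label = ((feature > 0.5) ^ (rng.random(1000) < 0.1)).astype(int)  # 10% noise

# Hold out the last 200 examples purely for assessment.
X_train, y_train = feature[:800], label[:800]
X_test, y_test = feature[800:], label[800:]

def accuracy(threshold, x, y):
    """Assessment metric: fraction of correct classifications at a given threshold."""
    return np.mean((x > threshold).astype(int) == y)

# "Learning": pick the threshold that scores best on the training data.
candidates = np.linspace(0, 1, 101)
best = max(candidates, key=lambda t: accuracy(t, X_train, y_train))

# Summative assessment: the same metric, on data the model never trained on.
print(f"chosen threshold: {best:.2f}")
print(f"training accuracy: {accuracy(best, X_train, y_train):.2%}")
print(f"held-out accuracy: {accuracy(best, X_test, y_test):.2%}")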

 

If the combination of learning outcomes and assessments rings a bell, that bell is probably competency-based learning. Commencing not with curriculum but rather with the competencies graduates are expected to exhibit (as expressed by employers, for example), competency-based programs and courses are architected around assessments that test for desired competencies. Then and only then do we turn to the task of developing curricula to best prepare students for these assessments. Ballyhooed 20 years ago, and even more 10 years ago, competency-based education has been a bust, with the notable exception of online everyday-low-pricing leader Western Governors University. Employers don’t understand competency-based programs and haven’t aligned their hiring systems and processes accordingly. And students don’t care; in the absence of paired competency-based hiring, they see little difference between (online) competency-based programs and run-of-the-mill online programs.

 

3. Iterative Improvement

Machine learning is hard to replicate at human scale. Imagine if a supersized school district took a million similarly situated students and saw who performed best on an assessment tightly linked to a clear learning outcome. And then did the same thing across a thousand learning outcomes. And then repeated the assessment a million times, each time iterating curricula and delivery to teach all students the way the highest-performing students were taught. One can imagine a thousand (or a million) angry school board meetings.

 

But machine learning’s most important lesson for learning is simply to watch what works and do more of that. There are many proven instructional practices that teachers and faculty simply disregard. Practices like active learning, peer learning, and frequent formative assessments and small assignments (scaffolding) are supported by a great deal of evidence. The acceleration of online learning, and the data collected by learning management systems, are making it easier to determine which practices correlate with student engagement and performance. But you’d be hard-pressed to spot any of them by observing what’s happening in classrooms – real or virtual.
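One way a machine operationalizes “watch what works and do more of that” is a simple explore-and-exploit loop: keep a running tally for each option and steer more and more trials toward whatever has worked best so far. A toy sketch follows; the practice names and success rates are invented purely for illustration and imply nothing about real classrooms:

# "Watch what works and do more of that," machine style: an explore-and-exploit loop
# that increasingly favors whichever option has the best observed record.
import numpy as np

rng = np.random.default_rng(2)

practices = ["lecture only", "active learning", "peer learning", "frequent small assignments"]
true_success_rates = [0.55, 0.70, 0.68, 0.72]  # hypothetical, and unknown to the chooser

trials = np.zeros(4)
successes = np.zeros(4)

for _ in range(2000):
    # Mostly pick the practice with the best observed record; occasionally explore.
    if rng.random() < 0.1 or trials.min() == 0:
        choice = rng.integers(0, 4)
    else:
        choice = int(np.argmax(successes / trials))

    # Observe whether the chosen practice "worked" this time.
    worked = rng.random() < true_success_rates[choice]
    trials[choice] += 1
    successes[choice] += worked

for name, n, s in zip(practices, trials, successes):
    print(f"{name:28s} tried {int(n):4d} times, observed success {s / n:.0%}")

Run it and the tallies drift toward the practices that keep working, which is more than can be said for many course catalogs.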

 

This article originally appeared on forbes.com; to read the full article, click here.

Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, providing Middleware Management, Monitoring, Tracking, and Analytics that detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers, enabling customers to constantly innovate. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, and many more.

 

The Nastel i2M Platform provides:

  • Secure self-service configuration management with auditing for governance & compliance
  • Message management for Application Development, Test, & Support
  • Real-time performance monitoring, alerting, and remediation
  • Business transaction tracking and IT message tracing
  • AIOps and APM
  • Automation for CI/CD DevOps
  • Analytics for root cause analysis & Management Information (MI)
  • Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics
