Saturday, December 16, 2017

Four Surprising and Innovative Uses of eLearning in 2017 | eLearningInside News - Editor’s Picks

"In the sphere of eLearning, countless businesses, educators, and individuals not only helped to develop new education technology; they implemented it in exciting and creative ways" says Henry Kronk, began his writing career as an intern at The Burlington Free Press in Burlington, Vermont, his hometown. 

Photo: eLearningInside News
By all accounts, 2017 has been a year unlike any other in recent memory. And we’re not talking about troubling politics–both domestic and international–or social movements or the media. 2017 has, overall, been a remarkable year when it comes to education technology.

Below we’ve compiled some unconventional and strange (but also, effective) eLearning initiatives that caught our eye.

KFC’s VR Training Module 
There was once a time when new fry cooks-in-training at Kentucky Fried Chicken would receive instruction from a manager or one of their superiors. But this summer, the fast food chain proved that the old model of employee training was downright 2000-and-late.

The new method they introduced included a VR simulation. But it wasn’t just some low-stress way to learn the ropes: it was a gamified escape room-style module replete with the ghost of Colonel Sanders himself heckling you at every turn. Learners are not allowed to leave the room until they correctly prepare a basket of fried chicken.

Needless to say, employees enjoyed the new method far more than the previous training. What’s more, while it took an average of 25 minutes to bring new employees up to speed with in-person training, it took employees an average of 10 minutes to successfully complete the VR simulation...

Robots in Michigan State University Classrooms 
Many online degrees allow students to stream into lectures at the brick-and-mortar version of their university, chat with their peers, or Skype with their professors. But in some graduate education programs at Michigan State University, remote students are literally taking a seat at the table.

They do this through the use of cameras (equipped for two-way live audio and video streaming) mounted on self-balancing robots. Students can control the robots, move them around the room, pivot them to look at their peer’s or instructor’s face, and adjust several other features. By and large, it allows students to participate in a class discussion as if they were really in the room.

“I teach graduate courses where the primary pedagogy is discussion-based,” Professor Christine Greenhow said. “When you’re in a discussion with some people in the room and others streaming in, you have these faces on the screen and you’re trying to talk to someone, look at their face, look at the camera, and look at other people in the room. You can’t have the same interpersonal experience.” The robots have begun to solve this problem.

Source: eLearningInside News

If you enjoyed this post, make sure you subscribe to my Email Updates!

Boosting student performance with robot learning | Digital Journal - Technology

Photo: Tim Sandle
"Remote learning is a growing means of delivering education. A downside is with student engagement" summarizes Dr. Tim Sandle, Digital Journal's Editor-at-Large for science news.

File photo: A person at his workplace, communicating via a Video Relay Service video.
Photo: SignVideo, London, U.K

This can be overcome, according to new research, when robotic assistants are used.

The Michigan State University research has concluded that online students who elect to use the innovative robots can feel more engaged and connected to the instructor and students in the classroom. This, in turn, leads to better understanding on the part of the student and improved educational attainment.

In trials the researchers used robots located in the classroom. Each robot was equipped with a mounted video screen. The screen can be controlled remotely by the student who is undertaking the lesson online. This facility allows the student to pan around the room, looking at the teacher, other students, or anything else that’s happening...

Commenting on the outcome, the head researcher, Professor Christine Greenhow, notes that teachers also benefit from the experience. Here, instead of looking at a screen full of faces as in traditional videoconferencing, the teacher can look a robot-learner in the eye (via digital means)...

The results of the study have been published in the journal Online Learning. The research paper is headed "Hybrid Learning in Higher Education: The Potential of Teaching and Learning with Robot-Mediated Communication." 
Read more... 

Source: Digital Journal


Experiences in Taylor Institute's 'forum' engage students in the art of dialogue and deliberation | UCalgary News

"Instructors invited to submit applications to teach in dynamic learning spaces; deadline for spring/summer applications is Jan. 30, 2018" inform Mike Thorn, Taylor Institute for Teaching and Learning.

“Public dialogues have deep historical roots across the world."
Photo: David Troyer, for the University of Calgary

Science in Society. Professional Communication and Interviewing. These course topics might not encourage immediate comparison, but three instructors who teach the courses in the Taylor Institute’s dynamic, adaptable forum — one of the building's three flexible learning spaces — find common ground in the value and importance of dialogue. In fact, these instructors argue that the forum comes to represent the content of the courses, manifesting the very act of learning through engagement.

Gwendolyn Blue, associate professor in the Department of Geography, emphasizes the crucial nature of respectful and critical conversation in learning about science in society. Critical exchanges help students work through challenging concepts and contentious topics that are part of everyday public dialogues.

“The course is grounded in dialogue and deliberation. We start with some ground rules, and those ground rules are that everybody speaks while appreciating that there are others in the room who may not hold similar assumptions and values,” she says. “We also are very conscious of some basics from rhetoric, such as no ad hominem attacks — criticize the argument, not the person. And so we keep our focus always on the argument. We’re also bound, because it’s about dialogue and deliberation, to consider all views on a topic, no matter how uncomfortable they might make us.”

Co-teaching a course called Professional Communication and Interviewing in the forum, social work instructors Sally St. George and Les Jerome believe that students benefit from watching instructors work together respectfully and thoughtfully. Watching collaborative teaching in action leads to effective collaborative learning.

Jerome reflects, “I think that students can clearly see that Sally and I both hugely respect each other, and I think that’s important for them to see.”

“We can’t predict everything that’s going to happen in the classroom,” St. George adds. “We can be quite well-planned, but we can’t predict, and so we also have to demonstrate that spontaneity. That’s so important; the students have to see us doing that.”

Learning by exchanging ideas
Both courses’ instructors appreciate the forum’s technological capacities, but more strongly emphasize the possibilities for engagement offered by the room’s most basic attributes: movable chairs and round tables...

Learning through dialogue
Both classes use the Taylor Institute forum’s movable round tables and chairs to incorporate regular group discussion and active learning. This method gives students the opportunity to engage in the kinds of collaborative processes that cut across disciplines. It’s all about having the space required for meaningful, learner-directed conversation...

The Taylor Institute invites instructors teaching university-level courses to submit applications to teach in TI learning spaces. 
Visit our Learning Spaces webpage to find out more information and to submit your application. 

Source: UCalgary News


NC teacher pursues ASU master’s degree through distance learning | Valley Courier - Community

Alamosa News writes, "Being a single mother of three and teaching full-time doesn’t stop Covey Denton from setting a high bar." 

Covey Denton, of North Carolina, appreciates the Adams State Teacher Education online master’s program, which helps her inspire students in science.
Photo: Courtesy

“My goal is to be the most amazing science teacher my students will ever have,” she says from her home in North Carolina. “I want to develop a profound love of science in my students through the activities and material I cover in my classroom. I want to spread my love of science to every single student that enters my room.”

Denton is pursuing her master’s degree through Adams State University’s graduate distance degree program. She enrolled in the fall of 2016 in the Adams State Teacher Education Department’s Master of Arts in Education, Curriculum and Instruction, with Endeavor STEM Leadership Certificate. She will graduate in December 2018.

The Adams State program has given Denton access to unique opportunities and resources she didn’t realize existed. “The forums to communicate with like-minded individuals have given me feedback and helped me grow as a teacher.”

The flexibility of the Adams State online master’s program works well with Denton’s schedule. “I am a single mom of three kids who has eight grades of lessons to prep.” She teaches preschool through 6th or 7th grade, depending on the year. “The online classes allow me to work ahead when I have spare time in my schedule and allow me to pace myself and plan.” She appreciates the well-organized classes and user-friendly format. “The NASA classes with the call-in classroom meetings are easy to schedule after the kids’ bedtime and allow me to really focus on the content being offered. I have enjoyed the prompt communication from my instructors and felt like I benefited a great deal from each course I have taken.”

The courses through Adams State’s online program have also increased Denton’s awareness of diversity in the classroom. “The courses through Adams State have helped me understand the needs of my students and best practices in the classroom, and allowed me to develop my own teaching philosophy and style.”

Source: Valley Courier


Op-ed: Let science educators build new science standards | Deseret News

Photo: John R. Taylor
John R. Taylor, president of the Utah Science Teachers Association, Associate Professor of Biology, and Assistant Dean for Integrative Learning at Southern Utah University, notes, "As Utah begins the process of revising the state science standards for elementary and high school, it’s a good idea to take a moment to ask why we teach science to K–12 students at all."

University of Utah graduate Margarita Ruiz teaches during a class at Bryant Middle School in Salt Lake City on Monday, May 22, 2017. 
Photo: Alex Goodlett, Deseret News

Science, engineering and the resulting technologies are interwoven into our lives and will be integral in meeting humanity’s most pressing future challenges. National data illustrate the need for highly skilled workers with strong backgrounds in these fields and the need is steadily increasing. 

Finally, the Utah Science Teachers Association believes that all citizens should have a scientifically based understanding of the natural world in order to engage meaningfully in public discussions, be informed voters and discerning consumers. 

Problems arise when nonscience ideals impede the teaching and learning of science, either through the use of pseudoscience or the avoidance of topics because they are politically charged. This unfortunately occurred during the process of developing the sixth-eighth grade SEEd standards, with regard to evolution and climate change in particular. 

Let me be clear: Every major scientific organization in the country — indeed, around the world — is on record as firmly asserting the scientific credibility of evolution and anthropogenic influence on climate change. 

Science teachers have a professional responsibility to teach science topics as understood by the scientific community, as both the National Science Teachers Association and its state affiliate, the Utah Science Teachers Association, recognize. Furthermore, the UtSTA firmly holds that nonscience topics have no place in science classrooms.

State science standards play an important guiding role. 
Read more... 

Source: Deseret News


What AI can really do for your business (and what it can’t) | InfoWorld

Photo: Isaac Sacolick
"Artificial intelligence, machine learning, and deep learning are no silver bullets. A CIO explains what every business should know before investing in AI" according to Isaac Sacolick, author of Driving Digital: The Leader’s Guide to Business Transformation through Technology.

Photo: InfoWorld

How can you tell whether an emerging technology such as artificial intelligence is worth investing time into when there is so much hype being published daily? We’re all enamored by some of the amazing results such as AlphaGo beating the champion Go player, advances in autonomous vehicles, the voice recognition being performed by Alexa and Cortana, and the image recognition being performed by Google Photos, Amazon Rekognition, and other photo-sharing applications.

When big, technically strong companies like Google, Amazon, Microsoft, IBM, and Apple show success with a new technology and the media glorifies it, businesses often believe these technologies are available for their own use. But is it true? And if so, where is it true?

This is the type of question CIOs think about every time a new technology starts becoming mainstream:
  • To a CIO, is it a technology that we need to invest in, research, pay attention to, or ignore? How do we explain to our business leaders where the technology has applicability to the business and whether it represents a competitive opportunity or a potential threat?
  • To the more inquisitive employees, how do we simplify what the technology does in understandable terms and separate out the hype, today’s reality, and its future potential?
  • When select employees on the staff show interest in exploring these technologies, should we be supportive, what problem should we steer them toward, and what aspects of the technology should they invest time in learning?
  • When vendors show up marketing claims that their capabilities are driven by the emerging technology and that they have expert PhDs on staff supporting the product’s development, how do we evaluate what has real business potential versus services that are too early to leverage versus others that are really hype, not substance?
What artificial intelligence really is, and how it got there  
AI technology has been around for some time, but to me it got its big start in 1968-69 when the SHRDLU natural language processing (NLP) system came out, research papers on perceptrons and backpropagation were published, and the world became aware of AI through HAL in 2001: A Space Odyssey. The next major breakthroughs can be pinned to the late 1980s with the use of backpropagation in learning algorithms and then their application to problems like handwriting recognition. AI took on large-scale challenges in the late 1990s with the first chatbot (ALICE) and Deep Blue beating Garry Kasparov, the world chess champion.

I got my first hands-on experience with AI in the 1990s. In graduate school at the University of Arizona, several of us were programming neural networks in C to solve image-recognition problems in medical, astronomy, and other research areas. We experimented with various learning algorithms, techniques to solve optimization problems, and methods to make decisions around imprecise data.

If we were doing neural networks, we programmed the perceptron’s math by hand, then looped through the layers of the network to produce output, then looped backward to apply the backpropagation algorithms to adjust the network. We then waited long periods of time for the system to stabilize its output.

When early results failed, we were never sure if we were applying the wrong learning algorithms, hadn’t tuned our network optimally for the problem we were trying to solve, or simply had programming errors in the perceptron or backpropagation algorithms.
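The hand-coded workflow described above (a forward loop through the layers of the network, then a backward loop applying backpropagation) can be sketched in a few dozen lines. The example below is an illustrative toy, a 2-2-1 sigmoid network trained on XOR in Python rather than the author's original C; every name, learning rate, and epoch count is invented for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for XOR, a classic problem that needs a hidden layer.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

# Randomly initialised weights; index 2 of each row is the bias term.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # input -> hidden
w2 = [random.uniform(-1, 1) for _ in range(3)]                      # hidden -> output

def forward(x):
    # Forward pass: loop through the layers to produce output.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, o

def epoch(lr=0.5):
    total = 0.0
    for x, y in zip(X, Y):
        h, o = forward(x)
        total += (o - y) ** 2
        # Backward pass: output delta, then hidden deltas (backpropagation).
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Adjust the network.
        for j in range(2):
            w2[j] -= lr * d_o * h[j]
        w2[2] -= lr * d_o
        for j in range(2):
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
            w1[j][2] -= lr * d_h[j]
    return total

first = epoch()
for _ in range(5000):           # "waiting for the system to stabilize"
    last = epoch()
```

Even this toy shows why debugging was so hard: a sign error in `d_h` or a missed bias update produces a network that quietly fails to learn, indistinguishable at first glance from a badly tuned one.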

Flash-forward to today and it’s easy to see why there’s an exponential leap in AI results over the last several years thanks to several advances.

First, there’s cloud computing, which enables running large neural networks on a cluster of machines. Instead of looping through perceptrons one at a time and working with only one or two network layers, computation is distributed across a large array of computing nodes. This is enabling deep learning algorithms, which are essentially neural networks with a large number of nodes and layers that enable processing of large-scale problems in reasonable amounts of time.

Second, there’s the emergence of commercial and open source libraries and services like TensorFlow, Caffe, Apache MXNet, and other services providing data scientists and software developers the tools to apply machine learning and deep learning algorithms to their data sets without having to program the underlying mathematics or enable parallel computing. Future AI applications will run on dedicated chips and boards, spurred by the innovation and competition among Nvidia, Intel, AMD, and others.
Read more... 

Source: InfoWorld   


Machine vision firm runs AI deep learning on Nvidia platform | Electronics Weekly

"MVTec Software, a Munich-based machine vision specialist, says it it now possible to run deep learning functions on embedded boards with Nvidia Pascal architecture" continues Electronics Weekly.

HALCON's deep learning now on NVIDIA Jetson boards

The deep learning inference in the latest version of the firm’s Halcon machine vision software was successfully tested on Nvidia Jetson TX2 boards based on 64-bit Arm processors.

The deep learning inference, i.e., applying the trained CNN (convolutional neural network), almost reached the speed of a conventional laptop GPU (approx. 5 milliseconds), says MVTec...

Photo: Dr. Olaf Munkelt
Dr. Olaf Munkelt, managing director, MVTec Software, believes the rapidly growing market for embedded systems requires corresponding high-performing technologies.

“AI-based methods such as deep learning and CNNs, are becoming more and more important in highly automated industrial processes. We are specifically addressing these two market requirements by combining HALCON 17.12 with the NVIDIA Pascal architecture,” said Munkelt.

Source: Electronics Weekly


Why AI Could Be Entering a Golden Age | Knowledge@Wharton - Technology

The quest to give machines human-level intelligence has been around for decades, and it has captured imaginations for far longer — think of Mary Shelley’s Frankenstein in the 19th century. Artificial intelligence, or AI, was born in the 1950s, with boom cycles leading to busts as scientists failed time and again to make machines act and think like the human brain. But this time could be different because of a major breakthrough — deep learning, where data structures are set up like the brain’s neural network to let computers learn on their own. Together with advances in computing power and scale, AI is making big strides today like never before.

Photo: Frank Chen
After years of dashed hopes, we could be on the brink of large breakthroughs in artificial intelligence for businesses thanks to deep learning, says Frank Chen of Andreessen Horowitz. 

Photo: Knowledge@Wharton

Frank Chen, a partner specializing in AI at top venture capital firm Andreessen Horowitz, makes a case that AI could be entering a golden age. Knowledge@Wharton caught up with him at the recent AI Frontiers conference in Silicon Valley to talk about the state of AI, what’s realistic and what’s hype about the technology, and whether we will ever get to what some consider the Holy Grail of AI — when machines will achieve human-level intelligence.

An edited transcript of the conversation follows.

Knowledge@Wharton: What is the state of AI investment today? Where do we stand?
Frank Chen: I’d argue that this is a golden age of AI investing. To put it in historical context, AI was invented in the mid-1950s at Dartmouth, and ever since then we’ve basically had boom and bust cycles. The busts have been so dramatic in the AI space that they have a special name — AI winter.
We’ve probably had five AI winters since the 1950s, and this feels like a spring. A lot of things are working and so there are plenty of opportunities for start-ups to pick an AI technique, apply it to a business problem, and solve big problems. We and many other investors are super-active in trying to find those companies who are solving business problems using AI.

Knowledge@Wharton: What brought us out of this AI winter?

Chen: There’s a set of techniques called deep learning that when married with big amounts of data really gets very accurate predictions. For example, being able to recognize what is in a photo, being able to listen to your voice and figure out what you’re saying, being able to figure out which customers are going to churn. The accuracy of these predictions, because of these techniques, has gotten better than it has ever gotten. And that’s really what’s creating the opportunity.

Knowledge@Wharton: What are some of the big problems that AI is solving for business?

Chen: AI is working everywhere. To take one framework, think about the product lifecycle: You have to figure out what products or services to create, figure out how to price it, decide how to market and sell and distribute it so it can get to customers. After they’ve bought it, you have to figure out how to support them and sell them related products and services. If you think about this entire product lifecycle, AI is helping with every single one of those [stages].

For example, when it comes to creating products or services, we have this fantasy of people in a garage in Silicon Valley, inventing something from nothing. Of course, that will always happen. But we’ve also got companies that are mining Amazon and eBay data streams to figure out: What are people buying? What’s an emerging category? If you think about Amazon’s private label businesses like Amazon Basics, product decisions are all data-driven. They can look to see what’s hot on the platform and make decisions like “oh, we have to make an HDMI cable, or we have to make a backpack.” That’s all data-driven in a way that it wasn’t 10 years ago.

Source: Knowledge@Wharton


Overcoming The Challenges Of Machine Learning Model Deployment | BCW - Business

Yvonne Cook, General Manager at DataRobot UK summarizes, "Our societies and economies are in transition to a future shaped by artificial intelligence (AI)." 

Photo: BCW

To thrive in this upcoming era, companies are transforming themselves by using machine learning, a type of AI that allows software applications to make accurate predictions and recommend actions without being explicitly programmed.

There are three ways that companies successfully transform themselves into AI-driven enterprises, differentiating them from the companies that mismanage their use of AI:
  • They treat machine learning as a business initiative, not a technical speciality.
  • They have higher numbers of machine learning models in production.
  • They have mastered simple, robust, fast, and repeatable ways to move models from their development environment into systems that form the operations of their business.
Commercial payback from AI comes when companies deploy highly-accurate machine learning models that operate robustly within the systems that support business operations. 

Why Companies Struggle With Model Deployment
While hard data is scarce, anecdotal evidence suggests that it is not uncommon for companies to train far more machine learning models than they actually put into production. Both organisational and technological challenges are in play here, and success requires that both are addressed. From an organisational perspective, many companies see AI enablement as a technical speciality. This is a mistake.

AI is a business initiative. Becoming AI-driven requires that the people currently successful in operating and understanding the business can also create tomorrow’s revenue and be responsible for both building and maintaining the machine learning models that grow revenues. To succeed, these business drivers will need collaboration and support from specialists, including data scientists and the IT team.

Machine learning models must be trained on historic data, which demands the creation of a prediction data pipeline. This is an activity that requires multiple tasks including data processing, feature engineering, and tuning. Each task, down to versions of libraries and handling missing values, must be exactly duplicated from the development to the production environments, a task with which the IT team is intimately familiar...
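As a minimal sketch of that duplication requirement: every preprocessing decision fitted during development, down to the missing-value policy, has to be captured as an artifact and replayed identically in production. The class, serialization format, and numbers below are invented for illustration; real automated ML platforms manage far richer pipeline artifacts.

```python
import json
import statistics

class PredictionPipeline:
    """Toy prediction data pipeline: mean imputation + min-max scaling."""

    def fit(self, rows):
        present = [r for r in rows if r is not None]
        self.fill = statistics.mean(present)   # missing-value policy, fitted once
        self.lo, self.hi = min(present), max(present)
        return self

    def transform(self, rows):
        out = []
        for r in rows:
            v = self.fill if r is None else r
            out.append((v - self.lo) / (self.hi - self.lo))
        return out

    def to_json(self):
        # Everything the production environment needs to reproduce
        # the development-time preprocessing, exactly.
        return json.dumps({"fill": self.fill, "lo": self.lo, "hi": self.hi})

    @classmethod
    def from_json(cls, s):
        p = cls()
        p.__dict__.update(json.loads(s))
        return p

train = [10.0, None, 20.0, 30.0]
dev = PredictionPipeline().fit(train)                 # development environment
prod = PredictionPipeline.from_json(dev.to_json())    # production environment
assert dev.transform(train) == prod.transform(train)  # byte-identical behaviour
```

The point of the round trip is that production never re-fits anything; it only replays the fitted parameters, which is what makes deployments repeatable.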

AI and machine learning offer companies an opportunity to transform their operations. IT professionals play a critical role in ensuring that the models developed by their business peers and data scientists are suitably deployed to succeed in serving predictions that optimise business processes. Automated machine learning platforms allow business people to develop the models they need to transform operations while collaborating with specialists, including data scientists and IT professionals.

Choosing an enterprise-grade automated machine learning platform will certainly make IT’s life easier. By providing guidance on organising for successful model deployment and the choice of appropriate technology, IT executives ensure their teams are recognised for their effective contribution to the company’s success as it transforms into an AI-driven enterprise.

Source: BCW


Friday, December 15, 2017

How Machine Learning Can Help Identify Cyber Vulnerabilities | Harvard Business Review - Analytics

Ravi Srinivasan, vice president of strategy and offering management at IBM Security notes, "Putting the burden on employees isn’t the answer."

 Photo: Pedro Pestana/EyeEm/Getty Images

People are undoubtedly your company’s most valuable asset. But if you ask cybersecurity experts if they share that sentiment, most would tell you that people are your biggest liability.

Historically, no matter how much money an organization spends on cybersecurity, there is typically one problem technology can’t solve: humans being human.  Gartner expects worldwide spending on information security to reach $86.4 billion in 2017, growing to $93 billion in 2018, all in an effort to improve overall security and education programs to prevent humans from undermining the best-laid security plans. But it’s still not enough: human error continues to reign as a top threat.

According to IBM’s Cyber Security Intelligence Index, a staggering 95% of all security incidents involve human error. It is a shocking statistic, and for the most part it’s due to employees clicking on malicious links, losing their mobile devices or computers or having them stolen, or network administrators making simple misconfigurations. We’ve seen a rash of the latter problem recently with more than a billion records exposed so far this year due to misconfigured servers. Organizations can count on the fact that mistakes will be made, and that cybercriminals will be standing by, ready to take advantage of those mistakes.

So how do organizations not only monitor for suspicious activity coming from the outside world, but also look at the behaviors of their employees to determine security risks? As the adage goes, “to err is human” — people are going to make mistakes. So we need to find ways to better understand humans, and anticipate errors or behaviors that are out of character — not only to better protect against security risks, but also to better serve internal stakeholders.

There’s an emerging discipline in security focused around user behavior analytics that is showing promise in helping to address the threat from outside, while also providing insights needed to solve the people problem. It puts to use new technologies that leverage a combination of big data and machine learning, allowing security teams to get to know their employees better and to quickly identify when things may be happening that are out of the norm.

To start, behavioral and contextual data points such as the typical location of an employee’s IP address, the time of day they usually log into the networks, the use of multiple machines/IP addresses, the files and information they typically access, and more can be compiled and monitored to establish a profile of common behaviors. For example, if an employee in the HR team is suddenly trying to access engineering databases hundreds of times per minute, it can be quickly flagged to the security team to prevent an incident.
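The baseline-and-flag approach described above can be sketched very simply: build a per-user profile from historical activity, then flag observations far outside it. The function names, z-score threshold, and data below are all invented for illustration; real user behavior analytics products combine many more signals.

```python
import statistics

def build_profile(history):
    # Profile of "common behavior": mean and spread of a monitored metric,
    # e.g. database accesses per minute for one employee.
    return {
        "mean": statistics.mean(history),
        "stdev": statistics.stdev(history),
    }

def is_anomalous(profile, observed, z_threshold=3.0):
    # Flag readings more than z_threshold standard deviations from the
    # user's own baseline (threshold chosen arbitrarily for the sketch).
    if profile["stdev"] == 0:
        return observed != profile["mean"]
    z = (observed - profile["mean"]) / profile["stdev"]
    return abs(z) > z_threshold

# Historical accesses per minute for a hypothetical HR employee.
baseline = [2, 3, 1, 4, 2, 3, 2, 3]
profile = build_profile(baseline)

assert not is_anomalous(profile, 4)    # within the employee's normal range
assert is_anomalous(profile, 400)      # hundreds per minute: alert security
```

The same shape generalizes to the other signals mentioned (IP location, login time, machines used): each gets a baseline, and the security team is alerted only on large deviations rather than on every event.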

Source: Harvard Business Review 
