TechHub: Digital Transformation in Chicago, Transform 2017 Keynote Kristi Woolsey & Ransomware in Industrial Networks

Chicago: The New Silicon Valley

Chicago’s technology community has been booming in the last five years.

A new KPMG report lists Chicago as a hopeful candidate to be the next international hub for innovation due to its talent and infrastructure, according to Inc. Magazine.

The city fosters a developing startup ecosystem, which raised more than $1.7 billion in funding in 2016, and ranks among the best in the country for growth of tech jobs.

Whereas Silicon Valley has shifted away from innovation to a lifestyle, landing tech leaders like the Snapchat CEO on the front page of a major fashion magazine, Chicago fosters a culture of no-nonsense leaders based on the value of hard work and dedication, according to Inc.


The rooftops at Wrigley Field. Source: CNNMoney

The digital revolution is making more millionaires than ever, pooling together the most brilliant minds on the planet to cultivate an innovative mindset that focuses on creating value.

Join GrayMatter and GE Digital for an interactive session on digital transformations in Chicago on Wednesday, June 21.

Designed for learning and sharing your thoughts, the session includes a chance to network with peers and watch the Cubs play the Padres from the exclusive rooftop suites at Wrigley Field.

Nate Arnold, VP of Manufacturing Digital Technology at GE Digital, and GrayMatter CEO Jim Gillespie will share specific insights on the best paths to success in digital manufacturing and reveal what has gone wrong in past cases.

Space is limited, so be sure to reserve your spot today.

Reserve My Spot

Inc. Magazine’s Best Workplaces of 2017 Announced

Inc. Magazine recently announced the 2017 best workplaces in their annual list, featuring GrayMatter.

Inc. recognizes the top companies to work for, asking thousands of employees about the places they work. For GrayMatter, transforming operations and empowering people includes our own people.

Transform 2017: MAYA’s Kristi Woolsey as Keynote Speaker

GrayMatter is proud to announce that Transform 2017, the annual GrayMatter conference in Put-in-Bay, Ohio, will feature MAYA Design’s Kristi Woolsey as a prominent keynote speaker this year.

Transform 2017 Kristi Woolsey

Kristi Woolsey, MAYA Design

Woolsey is a dynamic speaker with a long track record of providing insightful presentations that put today’s initiatives into a future context.

Most of her talks revolve around behavioral strategy, the technique of influencing employees and customers towards desired behaviors including greater loyalty, innovation, collaboration, and productivity. These talks and workshops challenge business leaders, HR, IT, and CRE professionals to create lasting branded experiences that increase employee and customer engagement, driving improved business outcomes.

GE Digital’s Senior Service Director Paul Casto will also present as a keynote speaker. He will showcase the power of brand-new applications that can be leveraged in the cloud to provide extremely fast, easily accessible information about all your assets, helping cut through the chaos and decipher your digital priorities and first steps.

Transform 2017 is a three-day conference in Put-in-Bay, OH, from August 1-3. Professionals in all verticals who are passionate about operational technology and transforming into digital industrial operations should plan to attend.

View our full agenda and register before June 9 to get the early bird special.

View Agenda

Industrial Networks at Risk of Ransomware Attacks

In recent weeks the news has been filled with reports of the newest malware, WannaCry ransomware, which has infected more than 200,000 systems worldwide.

An alert published by the Industrial Control Systems Cyber Emergency Response Team, a division of Homeland Security, advised organizations to update security software, create backups, train employees and configure access controls to block unauthorized access to sensitive systems.

Industrial environments are particularly susceptible to these types of attacks for several reasons, including the improper segmentation of IT and OT networks, unpatched Windows machines and the presence of SMB on devices hosting HMIs, engineering workstations, historians and other systems, according to Security Week.
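For teams taking stock of that exposure, a quick inventory pass can surface the obvious gaps. Here is a minimal Python sketch, assuming a simple host inventory with zone labels and known open ports; the inventory format and zone names are illustrative assumptions, not part of any ICS-CERT tooling:

```python
# Sketch: flag inventory hosts that expose SMB (TCP 445) outside the IT
# zone -- one of the gaps WannaCry exploited on OT networks. The inventory
# structure below is an invented example, not a real asset-management API.

SMB_PORT = 445

def find_smb_exposure(inventory):
    """Return names of non-IT-zone hosts with SMB open."""
    findings = []
    for host in inventory:
        if SMB_PORT in host.get("open_ports", []) and host["zone"] != "it":
            findings.append(host["name"])
    return findings

inventory = [
    {"name": "hmi-01", "zone": "ot", "open_ports": [445, 3389]},
    {"name": "historian", "zone": "ot", "open_ports": [1433]},
    {"name": "fileserver", "zone": "it", "open_ports": [445]},
]

print(find_smb_exposure(inventory))  # hmi-01 exposes SMB in the OT zone
```

The same pass could feed a segmentation review: any OT host that turns up here is a candidate for patching or firewalling off from the IT network.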

Phil Neray, VP of Industrial Cybersecurity at CyberX, believes that patching the vulnerability is not easy in the case of ICS.

“It’s worth noting that many of the SCADA applications embedded in our electrical grid and manufacturing plants were developed years ago and are tethered to older versions of Windows — so the fix isn’t going to be easy,” Neray said.



3 Lessons from the Unsung Hero: HMI/SCADA

Unsung hero (noun). One who does great deeds but receives little or no recognition for them.

In the midst of a changing industry, a data revolution and a shift toward operational efficiency, it’s no surprise that the HMI/SCADA landscape could be overlooked as the driving force behind efficiency.

In fact, in a recent guest post for ISA Interchange, Matt Wells, general manager of Automation Software Solutions at GE Digital, declared HMI/SCADA the unsung hero of eliminating unplanned downtime.

“Some engineers have an ‘if it ain’t broke, don’t fix it’ attitude, without realizing that continuing to use obsolete systems to collect, connect, and act upon vast amounts of production data from anywhere will inevitably lead to higher, hidden costs associated with big repairs and unplanned downtime,” said Wells.

Take Microsoft’s Windows XP, launched in 2001. A large number of control systems were launched on the platform because it was secure and stable. But as with many forms of technology, it became outdated and is no longer supported, making it extremely vulnerable to malware.

But Wells said this vulnerability and others can be avoided, as long as organizations reap the benefits from new technologies. Unplanned downtime can be avoided with HMI/SCADA.

Here are three important lessons from Wells’ post:

HMI/SCADA: The Gateway to the Industrial Internet

According to Wells, many HMI/SCADA developers have embraced OPC Unified Architecture, meaning their software can communicate with hundreds of different devices. With the strengthened security and multi-platform support, leveraging the Industrial Internet is made possible.

In other words, using the data that’s being collected by your SCADA allows you to identify more areas for efficiency improvement, whether that’s faster troubleshooting, lower operational costs, or increased energy savings. It’s the pathway to successfully leveraging the Industrial Internet.
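As a toy illustration of that idea, here is a short Python sketch that turns a simplified outage log into per-asset availability figures. The event format and numbers are invented for illustration; a real historian would supply timestamped tag data:

```python
# Sketch: compute downtime and availability per asset from a simplified
# SCADA-style event log of (asset, outage_start_hour, outage_end_hour).

from collections import defaultdict

def availability(events, window_hours):
    """Return {asset: fraction of the window the asset was up}."""
    downtime = defaultdict(float)
    for asset, start, end in events:
        downtime[asset] += end - start
    return {asset: 1 - hours / window_hours for asset, hours in downtime.items()}

events = [("pump-7", 2.0, 3.5), ("pump-7", 10.0, 10.5), ("mixer-2", 4.0, 6.0)]
print(availability(events, window_hours=24))
```

Even this crude roll-up shows which assets are eating the most uptime, which is where troubleshooting effort pays off first.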

Test Now, Save Later

No one is immune from security threats. And running through risk assessments once a year just isn’t going to cut it.

Wells stresses the importance of incorporating regular risk assessments into the schedule. Of course, the frequency of assessments might differ based on industry and plant applications. But by starting small with a conservative goal, regular assessments can be accomplished.

According to Wells, there are a few things to remember when making strides toward more regular risk assessments.

  • Update your software with the latest patches
  • Employ secure technologies and methodologies
  • Follow the guidelines for maximizing security provided by your software partners
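One low-effort way to start small is to track those reminders programmatically. Here is a hypothetical Python sketch; the 90-day cadence and the check names are assumptions for illustration, not a prescribed standard:

```python
# Sketch: a tiny recurring-assessment tracker built on the three reminders
# above. Check names and the 90-day cadence are illustrative assumptions.

from datetime import date, timedelta

CHECKS = ["patches_current", "secure_protocols", "vendor_guidelines_followed"]

def assessment_due(last_run, today, cadence_days=90):
    """True when the last assessment is older than the cadence."""
    return today - last_run >= timedelta(days=cadence_days)

def failed_checks(results):
    """results: dict of check name -> bool (passed). Missing checks fail."""
    return [check for check in CHECKS if not results.get(check, False)]

print(assessment_due(date(2017, 1, 1), date(2017, 6, 1)))  # True: overdue
print(failed_checks({"patches_current": True, "secure_protocols": False}))
```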

Don’t Forget to Upgrade Your HMI/SCADA

We couldn’t discuss avoiding unplanned downtime in a plant without acknowledging the importance of upgrading a SCADA.

“This can be addressed in a step-by-step approach that will not only increase uptime, but provide a range of benefits for your processing facility while preparing your plant for the future, a future where the Industrial Internet of Things is a reality,” said Wells.

If your software is more outdated than it should be, make a plan now to upgrade securely. Wells assures there are many benefits to modernizing your system:

  • Enhancing the security of your systems
  • Avoiding obsolescence
  • Leveraging the Industrial Internet and Real-time Operational Intelligence
  • Benefiting from new functionalities
  • Being able to mobilize your application – quickly and easily

And here are a few other notable stories from the week:

Mapping the Road to 5G: The Network for the Internet of Things

Information Age said this week that the move from 4G to 5G is inevitable, especially in an age fueled by data, video, and mobile browsing.

According to the article, 5G is being defined by new radio access technology, multi-layered networks that can handle high throughputs and data volumes at very low latency.

“5G was born not only because of the user applications demanding high throughputs and high bandwidth, but also the increasingly popular trends of connected smart devices that will flood global markets in the near future,” said Ben Rossi of Information Age. “With the increase in wearable technology, motion-based sensors, voice command and eye movement sensors, 5G use cases are being driven by low latency and high-reliability requirements of these sensor-connected Internet of Things (IoT) devices.” 

Reliability will be vital in the 5G/IoT network, and “insights-driven, customer-centric service level assurance will play a big part in ensuring reliability and the promise of 5G networks.”

Cars, Trains, and the IoT

According to Hannah Augur of Dataconomy, public transit in London made various strides toward the Internet of Things about a year ago, and other major cities are exploring new ways to use technology in this area as well.

While driverless cars are exciting and often talked about, there are many other examples of connectivity in modern travel. But Augur suggests three specific areas where we might actually see this change take shape:

  • Usage-based insurance
  • Micronavigation
  • Connectivity

Read the full article for more insight on future “smart transit”.

HMI Interfaces: A Renaissance

The human-machine interface (HMI) is in the midst of a “rejuvenation,” thanks to touch-screen technology seen in smartphones and tablets, according to Al Presher of Design News.

By throwing powerful microprocessors and connectivity options into the mix, there are more possibilities than ever.

“The emergence of mobile devices, smartphones, and tablets is having a greater impact on HMI development,” said Jen Vacendak, product support engineer and trainer for B&R Industrial Automation. “As new engineers enter the picture, they are accustomed to using those types of devices and we’ll be seeing more of a merger between the two technologies.”

Presher predicts a key trend in the HMI renaissance will be remote monitoring and the ability to view screens on mobile devices, providing valuable insight into any issue, anywhere, anytime.

“Other interesting developments are the continuing miniaturization and improved power efficiency of electronics,” said Presher. “We may see the HMI mounted on the surface of the enclosure or the machine with just small hole(s) in the panel for power and communications instead of having to cut a rectangular opening in a panel to mount the HMI device.”


Media We Link To:

“The Unsung Hero of Eliminating Unplanned Downtime: HMI/SCADA” – Matt Wells, ISA Interchange 

“The rise of 5G: the network for the Internet of Things” –  Information Age

“Cars, Trains, and the Internet of Things” – Dataconomy 

“Human-Machine Interfaces Are Undergoing a Renaissance” – Design News

Extreme Weather is Inevitable: Is the Success of Your Outage Management Process?


Upon reflecting on America’s history, it would be difficult to overlook the devastation and continuing challenges brought on by extreme weather events such as Hurricane Katrina in 2005.


It was one of the costliest natural disasters in history, racking up $80 billion in total losses and damages.

A perhaps lesser-known Category 3 hurricane reached the Long Island and New England region more than 70 years ago and caused more than $306 million in damage, a large sum of money now and considered even more exorbitant in 1938.

That’s the thing about extreme weather: one of the few constants in life, it doesn’t discriminate between continents or decades.

You can’t stop the rain, wind, or snow. But you can improve processes for outage management and damage assessment before weather strikes.

Outage management for utilities was largely a manual process in the past, and still is for some today, with little to no standard for collecting data. This makes for an inefficient restoration process for utility executives who strive for reliability and minimal outage duration for their customers.

The core of the utility business model is to provide reliable, resilient services within an expected range of time and power quality, after all.

By combining the traditional steps of outage management and restoration processes with recent technology such as smart meters and advanced metering infrastructure (AMI), outage management can be improved even further.

From the utility perspective, benefits include monetary improvements from cutting operating costs: fewer truck rolls, less wear and tear on vehicles and even less overtime for the crew.

But customers benefit, too. Due to the increasing reliance on digital technologies and electricity for productivity from residential, commercial, and industrial customers, a faster restoration time could mean economic benefits for them as well.

To improve the outage management life cycle, it helps to recognize that an integrated approach is most efficient: one that combines sources of information like smart meters, social media and switch status with a standardized, digital platform that collects information and sends that data back into the right hands.

Here’s how outage management can be taken to the next level with technology and innovation:

Outage Identification

Supervisory Control and Data Acquisition (SCADA) devices, combined with customer call-ins, are currently used at the most basic level for the first step of the life cycle: outage identification.

But what if utilities adopted AMI and smart meters as a means to gain valuable insight throughout the entire distribution grid? Smart meters allow “last-gasp” messages to be sent to the outage management system and can therefore identify both long-term and momentary service disruptions.
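To make the last-gasp idea concrete, here is a hedged Python sketch that groups such messages by upstream feeder to estimate where an outage sits. The meter-to-feeder mapping and the threshold are invented for illustration; a real OMS would consult the actual network model:

```python
# Sketch: group smart-meter "last gasp" messages by upstream feeder to
# estimate outage extent. The mapping and message format are assumptions.

from collections import Counter

METER_FEEDER = {"m1": "F12", "m2": "F12", "m3": "F12", "m4": "F30"}

def suspect_feeders(last_gasps, threshold=2):
    """Feeders with at least `threshold` meters reporting loss of power."""
    counts = Counter(METER_FEEDER[m] for m in last_gasps if m in METER_FEEDER)
    return [feeder for feeder, n in counts.items() if n >= threshold]

print(suspect_feeders(["m1", "m3", "m4"]))  # F12 has two reports
```

A single last gasp might be a blown service drop; several on the same feeder points the dispatcher upstream before any customer calls in.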

And don’t forget about the power of going mobile. Smartphones and other mobile devices that access the Internet can also provide valuable information right on hand.

Damage Assessment & Work Orders

Many utilities are using time- and resource-intensive processes for the damage assessment piece of the process, with no standard for reporting or cataloging changes for restoration.


This dated practice can be updated with prioritized work orders—field crews and damage assessors are most successful when they are ready for deployment as soon as the extreme weather event passes.

While the outage management system is a sufficient predictor of damage, it’s also critical that crew members relay any updated information on asset status they observe in the field. This practice can also help develop localized estimated times of restoration (ETRs) for customers.
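A rough sketch of how crew feedback might adjust a localized ETR follows, with an invented base-estimate table; the damage types, hours and adjustment rule are illustrative only:

```python
# Sketch: fold a field-crew adjustment into a localized ETR. The base
# estimates per damage type are made-up numbers for illustration.

BASE_ETR_HOURS = {"pole_down": 6.0, "line_down": 3.0, "transformer": 4.0}

def localized_etr(damage_type, crew_adjustment_hours=0.0):
    """Base estimate per damage type, shifted by what the crew observed."""
    return max(0.0, BASE_ETR_HOURS[damage_type] + crew_adjustment_hours)

print(localized_etr("line_down"))                              # 3.0
print(localized_etr("pole_down", crew_adjustment_hours=-2.0))  # easier fix: 4.0
```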

With the use of mobile computing, field crews can now gain even more access to data such as mobile mapping, network models, and visualization with a simple, graphical interface.

Work Order Execution

To deliver an accurate ETR, utilities need to efficiently track and deploy work orders from the field. But, as mentioned before, it’s equally important to reflect any changes made in the field.

Automating work order execution to mobile devices has proved successful for numerous utilities, even offering safety benefits.

At Western Power Corporation, the implementation of GE’s mobile platform led to a 50% reduction in switching incidents.

“Mobile switching has automated the communications between field crews and the control center [and has] reduced errors and bottlenecks caused by high call volumes, as communications now occur electronically. This results in more coordinated communication between field and control, reduced delays from the control center, faster restoration times, real-time data being received from the field, more accurate restoration times and improved reliability [and] data accuracy,” said Western Power Corporation, in the “Best Practices in Outage Management” white paper by GE and GTM Research.

The mobile platform also allows for automatic network status updates as switching occurs.

Planning and Post-Event Reporting

Planning, whether in a financial or operational sense, is crucial for effective resource utilization.


Current outage management systems can model what-if scenarios that aid utilities in mapping out what resources are needed and where they can be obtained.

Standardizing data collection processes not only creates efficiency in the damage assessment step of the outage management life cycle, it also allows for easier post-event reporting.

Of course, while evolving, modern outage management systems have begun to shape the process for the better, the systems are only as strong as the data is accurate. This means information must flow constantly in both directions between the field and operators, whether in real time or after the event.

Want to read more about improving the outage management process? Download the white paper by GE and GTM Research.


Everything Old is New Again in SCADA Architecture

I can recall from my youth a day when my cousin and I hung out at my uncle’s engineering office on a Saturday.  Maybe it was several days, but as I get older the days seem to meld together.

While my uncle worked on whatever project was eating up his weekend, he placed us in nearby cubicles and logged us into the facility’s mainframe computer.


Zork Screenshot

I found myself lost in countless hours of Zork.

For those of you not fortunate enough to have experienced Zork, it was one of the earliest interactive fiction games available for computers.

They just don’t make games like that anymore, for better or worse.

Shifting back to the present, it dawned on me, as I was presenting yet again on a thin and virtual architecture I put together for a client, that everything old is new again.



We are once again supplying operators with terminals that access a cabinet full of computing resources.  And for several good reasons.

In today’s data-driven world, companies are looking for solutions that are fault tolerant while being easy to maintain and configure.  At Gray Matter Systems, we have developed what we like to call the Virtual and Thin Architecture.

By being Virtual and Thin, we are leveraging thin client technology in a VMware environment.  This gives us several advantages.

What is Virtualization?

When I say the word server, most people think of a physical box. By virtualizing that box, we can host many software-based servers within it.  This reduces our overall footprint on several levels: physical space, power consumption and unused resources.



Using VMware also allows us to create a highly available and/or fault-tolerant environment by leveraging VMware technologies such as VMotion, VMotion High Availability and Fault Tolerance.

VMotion allows you to manually move servers from running on one host to another dynamically while the server remains running.

VMotion High Availability automates this process such that when a Host Box fails, any server that was running on said Host Box will restart on another Host Box in the Virtual Center.

Typical reboot time for a virtual server is around 30 seconds, compared to the 5-10 minutes of a typical physical server, thus minimizing downtime.

Couple that with iFix Enhanced Failover and the end user will not see a loss of data, as the redundant server takes over operation while the primary server automatically restarts on another host box.  Within one minute you’d be fully redundant (hardware and software) again.  VMware Fault Tolerance can take the hardware redundancy one step further and run a shadow instance of a server on another host box in the Virtual Center.

In this case, when a Host Box fails, any server that was running Fault Tolerant that was on that host box would see its corresponding shadow go active near instantly – thus no downtime at all.
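Putting rough numbers on those recovery paths makes the comparison concrete. A back-of-the-envelope Python sketch, using the restart times quoted above and treating the iFix Enhanced Failover takeover as effectively instantaneous (a simplification):

```python
# Sketch: rough downtime model for the recovery options described above.
# Restart times come from the text (30 s VM restart, 5-10 min physical,
# taken here at its 7.5-minute midpoint); zero-time SCADA failover is a
# simplifying assumption.

RECOVERY_SECONDS = {
    "physical_rebuild": 7.5 * 60,   # midpoint of the 5-10 minute range
    "vmotion_ha_restart": 30,       # VM restarts on another host box
    "ft_shadow_takeover": 0,        # shadow VM goes active near-instantly
}

def visible_outage(strategy, scada_redundant=True):
    """Operator-visible outage: with a redundant SCADA pair, the standby
    server masks the host recovery, so the operator view stays live."""
    return 0 if scada_redundant else RECOVERY_SECONDS[strategy]

print(visible_outage("vmotion_ha_restart"))                         # masked by failover
print(visible_outage("vmotion_ha_restart", scada_redundant=False))  # 30
```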

With our servers centralized, we can now look at how our clients connect to the data contained in our databases (which are running on the servers).

For the last 20 years, thick clients have been the prevalent technology used to connect to our data servers.  This required a physical PC to be located and maintained at each operator station or any station that needed access to our data.  We now recommend the use of Terminal Servers and Thin Clients.

Thin Clients Are Not PCs

They are devices, generally with no moving parts, that power a keyboard, mouse and monitor and establish an RDP session to a Terminal Server (Citrix or web sessions work as well).  The actual processing and configuration for the clients is then maintained on the virtualized Terminal Server running in our vCenter alongside the virtualized data servers.

Thus all of our computing resources are centrally located and maintained.  Replacing a Thin Client can be done in a matter of minutes.

Even less if you use thin client management software such as ACP’s ThinManager.



Our typical rollout these days for a distributed client server SCADA system consists of the following:

  • Redundant set of iFix SCADA servers
  • Redundant set of Terminal Servers running iFix Terminal Server and ACP ThinManager
  • Proficy Historian
  • Optional Proficy Webspace

These servers in turn are then accessed by thin clients out on the plant floor as illustrated below:

The overall solution provides ease of maintenance and a centralized configuration, all while allowing you to use your processing and storage resources more efficiently, with less power and a smaller overall footprint.

Lastly, it provides significant uptime performance and fault tolerance.

It’s funny how as times change, aspects and strategies of the past become vogue again.

In many ways we are back to that “mainframe serving the client terminals” architecture of the past.  Even bell bottoms made a comeback recently.  As for me, I’m just waiting for someone to develop an open-world version of Zork for the PS4 or Xbox.

One can hope.

Contact GrayMatter

Get in touch with us!