
The role of PowerShell in IT-aware Services and Applications

Several months ago, I had an interesting conversation about the potential of PowerShell as part of the IT/Enterprise Architecture. I then discovered that the benefits of architecting IT-aware Applications are still widely unknown or misunderstood. That’s why I would like to share my thoughts on this subject, pointing out the special role that PowerShell can play in this field.

IT-aware Services and Applications

IT-aware Services and Applications incorporate the necessary instrumentation so that IT and Operations Teams can control, monitor, diagnose and operate them using the same semantics that the business uses, in addition to the classic IT constructs and abstractions.

This instrumentation allows IT and Operations to align more efficiently with the business, because the business concepts themselves are fully integrated with the environment and the tools that both IT and Operations use to do their work.

This Instrumentation Stack can be broken down into different components or services that need to be implemented inside the applications:

  1. Activity Tracking and Performance Monitoring. In the Microsoft world, this is usually implemented as Logging Facilities (Log4net, Enterprise Library Logging Application Block, etc.), Performance Counters and ETW Providers.
  2. Command and Control: PowerShell in the Microsoft ecosystem (both components are sketched just below).
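
To make these two components concrete, here is a minimal PowerShell sketch for a hypothetical payments service. The PaymentService event source and the Suspend-PaymentProcessing function are illustrative names, not part of any shipping product:

    # 1. Activity Tracking: write an operational event to the Windows Event Log.
    #    Creating the event source requires elevation and only needs to run once.
    if (-not [System.Diagnostics.EventLog]::SourceExists('PaymentService')) {
        New-EventLog -LogName Application -Source 'PaymentService'
    }
    Write-EventLog -LogName Application -Source 'PaymentService' -EntryType Information -EventId 1000 -Message 'Payment batch processed: 1250 operations.'

    # 2. Command and Control: expose business-level verbs as advanced functions
    #    that IT can discover and call like any other cmdlet.
    function Suspend-PaymentProcessing {
        [CmdletBinding(SupportsShouldProcess = $true)]
        param([string]$Queue = 'Default')
        if ($PSCmdlet.ShouldProcess("payment queue '$Queue'", 'Suspend')) {
            # A real implementation would call the application's management API here.
            Write-Verbose "Suspending payment processing on queue '$Queue'."
        }
    }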

If you are an Enterprise Architect, you may be familiar with these Functional Blocks, as they are common cross-cutting concerns in many application designs.

The level of integration of those services with the business application will define how rich the business semantics are. Let’s see several examples of different integration strategies:

[Figure: Application integration strategies]

The global value is always greater than the sum of the individual contributions because of the interconnection opportunities with the whole IT ecosystem. For example, IT can coherently automate the whole service stack using its existing set of skills and tools and, similarly, connect that service with the rest of the ecosystem using business-aligned semantics.

The vision

As you have probably imagined, the end goal is to be able to define your Infrastructure as Code in the context of a complex and heterogeneous Data Center. This way, IT teams become Software Engineers for the Data Center, just as traditional Software Factories are for Application Development.

In this ideal world, your IT organization becomes and behaves as an internal Cloud Provider that delivers:

  • very short response times and a business-aligned time to market.
  • high and measurable quality levels.
  • high and measurable productivity levels.

This is all possible because everything is software, running automated procedures and reacting to high-level orders like:

  • provision more capacity to these services and use it once available.
  • start, stop, pause, restart payments or valuations.
  • if purchases/hour are greater than X, then increase capacity (see the sketch after this list).
  • tell me how much capacity I need to meet this business response time.
  • migrate all the business to this other Data Center with zero impact.
  • etc.
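
As an illustration, the purchases/hour rule above might be expressed as a small script like this sketch, where Get-PurchaseRate and Add-ServiceCapacity are hypothetical cmdlets that an instrumented application would expose:

    # Scale out when the hourly purchase rate crosses a business threshold.
    # Get-PurchaseRate and Add-ServiceCapacity are assumed, application-provided cmdlets.
    $threshold = 10000
    $purchasesPerHour = Get-PurchaseRate -Service 'OnlineStore' -Window '01:00:00'
    if ($purchasesPerHour -gt $threshold) {
        Add-ServiceCapacity -Service 'OnlineStore' -Instances 2
    }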

In the Open Source Software world, some Cloud Providers –both public and private– do implement these concepts and seem to succeed in delivering on these promises.

I am talking about the same ideas, but delivered through a more mature and enterprise-ready Instrumentation Stack (Performance Counters, ETW, PowerShell, etc.). Returning to the earlier visual example, in this model “IT can coherently automate the whole service stack using the existing set of skills and tools and, similarly, connect that service with the rest of the ecosystem using business-aligned semantics”.

Naturally, this vision is subject to evolution, both in implementation and in maturity. In the end, it doesn’t seem very realistic, and may be unnecessary, to achieve everything at once.

Many IT management products claim to do this. This is where very passionate debates arise about coupling strategies: Product vs. Language or Technology Stack.

Coupling issues

Unfortunately, greater business integration also comes at the price of greater coupling risks that should be managed accordingly. Here I would consider two different kinds of coupling risk:

  • Business Coupling risks. I wouldn’t really consider this a risk: if you don’t want to couple with your own business and applications to obtain the value you are pursuing, you have a bigger problem. Know yourself first.
  • Technology Coupling risks. Here we find, again, usual considerations about coupling strategies.

[Figure: Comparison of application integration strategies]

This is always a tough and murky topic. But in this particular context of the IT-aware Data Center, here is how I see the game today:

Given that you always have to couple with something in order to leverage some sort of value, I consider it a better strategy to couple with a Language or a Technology Stack rather than with a Product or a particular Supplier. Open Standards are key, while Open Source is good but not required.

Of course, I may be plain wrong and, undoubtedly, this strategy is neither perfect nor trouble-free. But I find that this approach has the following benefits:

  1. Languages or Technology Stacks have a longer and a more stable Life Cycle when compared to Products or Suppliers.
  2. You can choose whichever Product or Supplier and manage their Life Cycles at will, as long as they are properly connected to your underlying and strategic Language or Technology Stack.

In the Microsoft world, the Instrumentation Stack -Activity Tracking + Performance Monitoring + Command and Control- is proprietary at some points of its implementation, but:

  • It does implement known Design Patterns. In fact, the whole stack is designed as an Infrastructure Service. This way, applications can plug into it and get a common set of services for free: Session Management, Remoting, Security, Data Collection, Visualization and Tooling (see the sketch after this list).
  • It is extensively documented and maintained.
  • It provides you with extensive, growing and powerful tooling.
  • Thanks to the efforts of the Common Engineering Criteria, the Instrumentation Stack is an ever-growing, built-in-by-default feature throughout the whole Microsoft ecosystem.
  • Open Standards don’t make sense at every layer of the Instrumentation Stack. Lower-level instrumentation is an ad-hoc component in every system I know: at best, it is a mesh of Adapters with hard-wired relationships. Nevertheless, Microsoft instrumentation interoperates really well when interoperability is an issue. In fact, the upper layers of the architecture are based on Open Standards (.NET, XML, WBEM, WS-*, SNMP, etc.).
  • It does integrate nicely with Applications in order to create those business semantics and successfully connect them to the rest of the IT ecosystem.
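
As a small illustration of that Infrastructure Service design: a function authored once automatically gains session management, remoting and security from the platform rather than implementing them itself. This is a minimal sketch; the computer name is illustrative:

    # PowerShell Remoting: sessions, transport and security come from the
    # platform, not from the application being managed.
    $session = New-PSSession -ComputerName 'app-server-01'
    Invoke-Command -Session $session -ScriptBlock { Get-Service -Name 'W3SVC' }
    Remove-PSSession -Session $session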

If you add to this list the intrinsic values that PowerShell itself delivers, you can now understand why PowerShell is playing an increasing role in the IT management game: from the Hardware to the Application. These are also the same reasons why I consider the Microsoft Instrumentation Stack the Strategic Asset to couple with.

The IT landscape

Let’s see what a PowerShell-instrumented IT stack looks like today, so that we can have a more realistic view of how the ecosystem is evolving.

As we see further progress in Common Engineering Criteria compliance -or similar approaches- it is very likely that more alternatives will become available at each layer. What starts to become clear is that PowerShell is nicely positioned as an IT-friendly Command and Control platform for the whole Data Center.
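
As a rough illustration of that positioning, PowerShell verbs already span several layers of the stack. The sampling below assumes the corresponding modules are available (WebAdministration for IIS, the Exchange Management Shell for Get-Mailbox); Get-PaymentQueue stands in for a hypothetical application-level cmdlet:

    # One language across many layers of the Data Center.
    Get-WmiObject -Class Win32_Processor   # hardware, via WMI
    Get-Service -Name 'W3SVC'              # operating system services
    Get-Website                            # middleware (WebAdministration, IIS)
    Get-Mailbox -Identity 'finance'        # product layer (Exchange Management Shell)
    Get-PaymentQueue -Status Pending       # hypothetical application-level cmdlet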

Anyway, one thing will remain true: the Application/Service layer will always be the “last mile” and, therefore, the hardest component to instrument. It is not the implementation that makes it difficult, but the commitment and the architectural framing of such a strategic decision. This instrumentation may be hard, but it is the one that promises to deliver true business semantics.

Non-Microsoft platforms

You may be thinking that taking this vision to non-Microsoft platforms looks like an impossible endeavor. Well, although it is not an easy task, it isn’t impossible.

The biggest issue is the diversity of the Instrumentation Stack. Differences are both significant and philosophical: from Domain Specific Tools -found on most Unix-like systems- to system-wide Infrastructure Services -the approach you will find in Microsoft products.

Excessive diversity makes standardization pretty hard, although you can consider different Integration and Transition strategies that fit your particular scenario. In any case, whichever strategy you decide on, you will have to take into account:

  • the nature and diversity of your System Software ecosystem: OSs, databases, etc.
  • your Application ecosystem and their associated technologies.
  • your global Life Cycle map.
  • your Short, Medium and Long Term goals.

Exploring the different integration and transition strategies for non-Microsoft platforms can be exciting but, somehow, endless. We may talk about this in a future post ;-). Anyway, I would suggest using the Microsoft framework as a reference to learn from: the Microsoft Instrumentation Stack has a very elegant and powerful design that can inspire you in your particular projects.
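
As a taste of one such integration strategy, a Unix domain-specific tool can be wrapped behind a PowerShell function so that it joins the common Command and Control surface. This minimal sketch assumes key-based SSH authentication, an ssh client in the PATH and a classic init-script layout; the host and service names are illustrative:

    # Wrap a remote Unix status check as a PowerShell function so that it
    # participates in the same Command and Control surface as everything else.
    function Get-UnixServiceStatus {
        param(
            [string]$ComputerName,
            [string]$Service
        )
        # Assumes key-based SSH authentication and an 'ssh' client in the PATH.
        ssh $ComputerName "/etc/init.d/$Service status"
    }

    Get-UnixServiceStatus -ComputerName 'lnx-app-01' -Service 'postgresql'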

Conclusion

The Instrumentation Stack is a key player in the effort to align IT with the Business. It also plays a key role in any Internal/Private/Enterprise Cloud strategy. No matter whether we call it Cloud or not: every company in the world must constantly pursue productivity improvements if it wants to survive in a competitive market, and this framework is a clear productivity booster.

Microsoft’s proposition for these Infrastructure Services is not new and, as we have seen with PowerShell, it is experiencing significant market adoption on its own merits.

Understanding not only the Technology, but also the Architecture of the Instrumentation Stack, as well as your whole IT ecosystem, will help you shape the roadmap that fits your needs.

Fortunately, this is a never-ending learning journey. Therefore, there are no unique answers, and I won’t claim to be right. My goal has been only to show you the landscape and the benefits of taking an Instrumentation Stack into account, particularly the Microsoft implementation.

PowerShell seems to be nicely positioned as an IT-friendly Command and Control platform for the whole Data Center. As this position becomes more solid, which looks very likely, we could consider it a platform to standardize Data Center solutions upon… or not. What do you think?

License

Except where otherwise noted, ThinkInBig by Carlos Veira Lorenzo is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Published in: Architecture, Automation, Software Engineering, Technology