Is The Asset Framework Worthwhile?

November 5, 2010

Introduction

OSIsoft's summer release of PI Server 2010 signaled a paradigm shift from Tag Centric to Asset Framework (or AF) Centric. Since the company started in the 1970s, OSIsoft has operated under a paradigm where the PI tag and tag searching were the standard data access method. Now OSIsoft wants you to use the Asset Hierarchy to find data.

This isn't the first time that OSIsoft has offered an object hierarchy and tag aliasing; the PI Module Database has been around for years, but it was an alternative search method, not the recommended standard that the AF is designed to be. Thus OSIsoft is asking customers to make a major shift in the way they access data.

Is this major change justified or just a change we have to put up with?

So often we see software manufacturers ‘changing’ their applications in an attempt to justify a price increase or to help them maintain cash flow. What year did Microsoft Word and Excel upgrades stop being useful to you and become a hassle instead?

This release of the Asset Framework is a long-awaited, significant improvement to the PI System; it is well worth the effort required. How important is it? The benefits of the AF are so important that we were willing to take the risk of putting a newly released PI Server 2010 into production in a validated environment only months after its release!

Are we insane? Were we committing suicide?

No, we are not insane and, as it turned out, we didn't commit suicide. We've been working with OSIsoft since 1996, so we have a good understanding of PI upgrades and releases. We also completely cloned the existing production PI system and upgraded the clone to PI Server 2010 with all of the modules and client tools, using the production data, graphics, SharePoint, and RtReports. We prototyped all aspects of it and captured screen shots for all server and client software installations.

Were we guinea pigs discovering issues with the new software?

Although I logged 10 or so tech support calls with OSIsoft during this prototype period, the issues were mostly security-related configuration issues, with only 1 software bug (a column length in an SQL table). However, our switch to PI OLE DB for custom software development uncovered an issue in which custom applications locked up because of a problem recycling resources; we have to wait until January for the next release of the PI SDK to fully resolve it. Until then, a registry entry extends the time before resources are released. The decision to use PI Server 2010 forced us onto this PI SDK.

When you consider the complexity of multiple servers, access to services and applications from multiple systems, and the synchronization of all the layers, it went smoothly.

What about the Synchronization of the MDB and the AF DB?

Our biggest concern was the interface of old with new: the co-existence of the older Module database with the new Asset Framework database. We were planning this in the Spring, before the software was out, and we didn't know how much we'd be straddling old vs. new. It turns out that PI Batch and RtReports still use the Module database, so a 2-way synchronization needs to happen between the two databases.

Our PI implementation was done in 2 phases. Phase 1 in 2009 was intentionally limited to raw I/O without PI Batch. We followed the OSIsoft Engineering plan and chose to wait for the new AF rather than attempt to utilize PI Batch in the first phase. Thus, we didn't have anything in the Module database before the implementation of PI Server 2010.

Unfortunately, PI Batch and RtReports still use the Module Database even with PI Server 2010, and one has to rely on the 2-way synchronization service to keep everything flowing. I've been pleased with the service; I can create new batches in the Module Database and they magically appear in the AF configuration, and vice versa. There have been no issues with it.

We believe the synchronization of the 2 databases was the best possible juncture between old and new. The clients see only the new world, while the remaining 'under the hood' applications of Batch and RtReports continue to operate until OSIsoft is ready to move them. We understand the value of phasing in a new architecture, and OSIsoft chose a good implementation point.

What is the downside?

The PI System has become more complex and requires more server instances and more packages to work seamlessly together. Add the tighter, more complex security schemes of the latest Microsoft operating systems and you have some issues to overcome.

Our 'biggest' problem was setting up appropriate security. Is the AD account that's running the application pool in SharePoint authorized to access the PI server and the AF SQL Server database? How about RtReports, which uses PI Web Services to talk to the AF? Don't forget the Manual Logger access, the PI OLE DB access, etc, etc! Where do you set all that? The good news is that OSIsoft Tech Support knows, even when I overlooked it in the installation procedures, so I was never stuck for long.

Obviously, the downside is far less than the upside or I wouldn’t be writing this article!

So what benefits justify this risk?

The full implementation of the Asset Framework across the entire PI platform provided the following benefits:
1. Major time savings
2. Ease of use for end-users and PI administrators
3. Ease of maintenance
4. Increased accessibility to the data and data structures
5. Integrated end-user security

The use of AF templates and template-based graphics reduced 100 potential process unit graphics down to 9 equipment template graphics based on unique equipment types. As a result, the graphics are easier to maintain, a standardized look and feel is easier to enforce, and the time savings are tremendous. As the company expands to additional buildings, areas, process units, and equipment, one simply reuses the existing templates and graphics, adding more elements in the AF that reference the templates.
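
To make the reuse concrete, here is a toy sketch of the idea in Python; the display definition, attribute names, and element names are all made up for illustration, and real AF templates and template-based displays are of course richer than a dictionary:

# One display definition per equipment type, written against template
# attribute names instead of raw tag names.
vessel_display = {
    "title": "{element} Overview",
    "pens": ["Vessel Weight", "Jacket Temperature"],  # template attributes
}

# Hypothetical AF elements, all built from the same vessel template.
elements = ["BLD1.Vessel01", "BLD1.Vessel02", "BLD2.Vessel17"]

def open_display(display, element):
    """Bind the single generic display to one specific element."""
    return {
        "title": display["title"].format(element=element),
        "pens": [f"{element}|{attr}" for attr in display["pens"]],
    }

# 3 elements (or 300) reuse the same display definition unchanged.
for e in elements:
    print(open_display(vessel_display, e))

Adding a new vessel then becomes a data change (one more element referencing the template), not a new graphic.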

It solved an age old problem!

Every implementation of a data historian has faced the dilemma of what to call the data points: what is a good tag naming convention? Because the instrument engineers were usually the first to work with a new PI system, the tags were typically named after the instrument tags on the P&ID diagrams, which is helpful only to a small subset of the ultimate PI users.

With the AF, you can have your cake and eat it too! Go ahead, name the tags based on the P&ID diagrams, but then create equipment templates with attributes that use English-like naming (ex. ‘Vessel Weight’ instead of WIT1312.PV). Now engineers can search by tag name and upper management can navigate an asset tree or equipment hierarchy to locate a data point with a nearly intuitive interface. That is a big deal!
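
Conceptually, each element's attributes act as a small translation table from friendly names to tag names. A toy sketch (the names are invented; real resolution happens inside the AF, not in a dictionary):

# Hypothetical attribute -> PI tag mapping for one vessel element.
vessel_1312 = {
    "Vessel Weight":      "WIT1312.PV",
    "Agitator Speed":     "SIC1312.PV",
    "Jacket Temperature": "TIT1312.PV",
}

def resolve(attributes, friendly_name):
    """Translate the name a manager navigates to into the engineer's tag."""
    return attributes[friendly_name]

print(resolve(vessel_1312, "Vessel Weight"))  # -> WIT1312.PV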

But you say Tag Aliasing is nothing new!

Correct, it is not! OSIsoft has been offering some form of tag aliasing since the 1990s, when the ISA industrial standards were defining Batch (S88) and Enterprise (S95) data access. I was a member of both committees and have been a heavy proponent of these standards and of more universally accepted tag naming since the 1990s, but I waited until August 2010 to commit to it… Why? The answer is SQL Server!

The partnership of Relational and Time Series data structures

Until now, the data storage mechanisms of the batch database and the module database were not robust enough to become the primary method for accessing information. Early on it was a proprietary database, then Microsoft Access, but now it is the industrial-strength MS SQL Server serving as the backbone, which can handle the load.

In 1998, I broke a cardinal rule by presenting 'Synergy of PI and SQL Server' at the OSI conference: the concept of relational and time series data working together. Now OSIsoft has not only adopted that philosophy but made it central to its design!

With the use of SQL Server, the doors swing wide open to let in a nearly infinite set of possibilities of how to better utilize time series data and the PI application.

Summary

Without the 15-year history with OSIsoft, the attention to detail that OSIsoft developers apply, and their world-class Tech Support infrastructure, we would never have considered such an undertaking. OSIsoft was open and accurate in their recommendations throughout this process, which made all the difference.

With good planning and prototyping, the move to AF is worth your while. Each implementation phase of a PI system adds a new layer of functionality (instrument I/O -> Calculations -> Batch -> Analysis -> Integration -> etc.), and each layer translates into a higher ROI for your long-term PI investment; even with the additional cost of PI Server 2010, you will see a good return on your investment. Don't stop at the first implementation layer; take full advantage of PI Server 2010.

Thank you OSIsoft, keep it up!

Rich Winslow is a co-owner of Automated Results Computer Consulting, an OSIsoft PI Partner with 15+ years of experience with OSIsoft PI and 30+ years of manufacturing solutions experience. Learn how Automated Results can help you get a higher return on your PI investment.

Relational Database vs. Process Historian for process data; use BOTH!

July 5, 2010

We frequently get asked to compare a relational database for storing time series data versus a true Process Historian, so I wanted to capture our viewpoint.

The first hurdle is to dispel any rumors that I'm against relational databases; I've been pushing for the use of relational databases for process data since the mid 1990s, when I was dealing with batch records. Back then I was looking to run an OSIsoft PI Data Historian alongside a Microsoft SQL Server.

I’ve run into enough cases where a relational database will not work for time series data based on a combination of the following:

- I/O rate
- # of tags
- Volume of data
- Uncompressed input data, or poorly tuned exception specs on the input data

People often ask what the compression savings is between a relational database and a properly tuned Swinging Door Algorithm in a Process Historian. We typically see between a 1:1000 and a 1:5000 difference; in other words, a 1 gigabyte Process Historian archive becomes a 1 to 5 terabyte relational database, which is significant.
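
For the curious, the swinging-door idea fits in a few lines of code. The sketch below is a simplified illustration of the concept only (it ignores PI's exception test, maximum compression times, and status handling) and is not OSIsoft's implementation:

def swinging_door(points, comp_dev):
    """Keep only the points needed to reproduce the series within comp_dev.

    points: list of (time, value) pairs with strictly increasing times.
    """
    if len(points) < 3:
        return list(points)
    archived = [points[0]]
    anchor_t, anchor_v = points[0]
    max_lower, min_upper = float("-inf"), float("inf")
    held = None  # most recent point received but not yet archived
    for t, v in points[1:]:
        dt = t - anchor_t
        # Slopes of the two "door" edges through the anchor point.
        lower = (v - comp_dev - anchor_v) / dt
        upper = (v + comp_dev - anchor_v) / dt
        max_lower = max(max_lower, lower)
        min_upper = min(min_upper, upper)
        if max_lower > min_upper:
            # Doors have closed: archive the held point and restart from it.
            archived.append(held)
            anchor_t, anchor_v = held
            dt = t - anchor_t
            max_lower = (v - comp_dev - anchor_v) / dt
            min_upper = (v + comp_dev - anchor_v) / dt
        held = (t, v)
    archived.append(held)
    return archived

For example, [(0, 0), (1, 0), (2, 0), (3, 10)] with comp_dev=1 compresses to [(0, 0), (2, 0), (3, 10)]: the flat stretch collapses to its endpoints.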

One might argue that such ratios are unusually large and that most implementations don't get that big. In January, I moved process data for a customer from a small lab with just 200 tags out of a relational database and into a Process Historian, because the relational database had choked after 2 years and 5 terabytes of data; the same data reduced down to 5 gigabytes. It choked because the relational database was not properly indexed and because the control system didn't define realistic exceptions to filter insignificant changes (ex. system noise). Of course you could say that poor input and an untuned database are not the fault of the relational database, and you would be right. However, we run into that all the time:

1) Customers don't have experienced staff to tune the exception specs in the control system, or the 'data people' are not allowed to touch the control system (ex. a validated environment); a toy illustration of exception filtering follows this list.
2) Customers have basic database skills, but not a true DBA who knows how to tune. They 'never thought' it would get that big.
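
Here is that toy exception filter. It is a simplified sketch of the concept only; the deadband and data are made up, and real PI interfaces also resend the last value before each exception and honor minimum/maximum report times:

def exception_filter(samples, deadband):
    """Report a value only when it moves more than `deadband` away from
    the last reported value; in-between 'noise' never leaves the source."""
    reported = []
    last = None
    for v in samples:
        if last is None or abs(v - last) > deadband:
            reported.append(v)
            last = v
    return reported

noisy = [100.0, 100.2, 99.9, 100.1, 104.8, 104.9, 105.1]
print(exception_filter(noisy, deadband=0.5))  # -> [100.0, 104.8]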

So compression is significant and having the right tool with the right expertise is important, but it’s not the whole picture…

It is true that storage is cheap, but a relational database spends too much time storing AND too much time retrieving. When a continuous process is feeding in new data AND users want to view data, a 1000-to-1 ratio of system noise to useful information is significant. It's the classic calculus problem of filling a cylinder while you are emptying it: can you keep up? You may keep up in the beginning, but as time goes on the relational database strangles itself even with a good index, because the index keeps growing.

A process historian instead stores data in individual files by time range, so the file size is fixed (ex. 500 MB for 10,000 tags) and performance is consistent regardless of how many years of data are online. As a result, a 100,000-tag system can request 365 days of data for 20 tags and get the same response time with 1 year or 10 years of history; you won't get that even from a well-tuned relational database.
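
To see why the race is easy to lose, here is some back-of-the-envelope arithmetic; the tag count, scan rate, and row size below are illustrative assumptions, not measurements from any particular system:

# Rough arithmetic: raw relational growth for a modest plant.
tags = 10_000          # assumed tag count
scans_per_sec = 1      # assumed scan rate per tag
bytes_per_row = 50     # assumed row overhead (timestamp, id, value, index)

rows_per_day = tags * scans_per_sec * 86_400
gb_per_year = rows_per_day * 365 * bytes_per_row / 1e9
print(f"{rows_per_day:,} rows/day, ~{gb_per_year:,.0f} GB/year before indexes")
# -> 864,000,000 rows/day, ~15,768 GB/year before indexes

At that raw rate, even a well-indexed table is fighting its own growth, which is exactly the job that exception and compression filtering do for a historian.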

A Process Historian is not just a niche time series database; it is an application that offers functionality beyond just data storage:

1) Time series trending with the intelligence not to display every value but enough inflection points to be accurate and readable, plus the ability to zoom in/out and scroll; a normal x-y chart will not do.
2) Historical Process Graphics displays
3) Real-time screen updates
4) Statistical Process Control
5) Accurate totalization
6) Access to multiple data historians
7) Interfaces that can record data from 250+ different control systems (OPC and non-OPC)
8) N-way redundancy and high availability (redundant data collection and redundant servers)

Relational databases can be used for storing time series data in select cases, but it has not been difficult to justify the cost of a process historian when you consider the additional benefits of a process historian as an application and not just a database. If you cannot justify the cost of a process historian and a relational database makes sense for now and the long term, then by all means use the relational database… it is an awesome tool!!!

But Process Historians like OSIsoft PI are learning to provide BOTH types of database in their application offering. OSIsoft is using Microsoft SQL Server for its Asset Framework, which holds the asset management hierarchy and relational-style batch history data, AND it has shifted from a tag-centric application to an asset-centric application with a relational database at the core.

So consider using both in a best of breed model and it becomes a win-win scenario!

—————————————————————–

Bio: Rich Winslow, co-owner of Automated Results Computer Consulting

Rich has been working with Hierarchical, Network, and Relational Databases since 1980 and Data Historians since 1990, and has been a process historian consultant since 2000, working with Oil Refineries, Pharmaceuticals, Paper Mills, Power Plants, and Chemical Manufacturers of all sizes. He was a member of the S88 Batch standards committee and one of the original members of the S95 Enterprise Data standards committee.

How to access a PI Audit log file?

The PI application allows you to turn on audit trails of specific PI server subsystems.

These audit trails may be a requirement in certain industries, but I'd recommend that all PI systems enable them so you have something to refer back to when questions come up. Disk space is cheap these days, and this information can be valuable when someone asks questions about the integrity of the PI system.

I thought that it would be worthwhile to demonstrate how to get an audit trail log file dump out of the OSIsoft PI application, since it isn’t intuitive and I couldn’t find a complete working example.

You turn audit trails on or off per subsystem by enabling one or more bits of the EnableAudit parameter:

Database         Subsystem   Bit    Value (Hex)   Value (Decimal)
---------------  ----------  -----  ------------  ---------------
Point Database   PIBasess    0      1h            1
Digital State    PIBasess    1      2h            2
User Database    PIBasess    5      20h           32
Group Database   PIBasess    6      40h           64
Trust Database   PIBasess    7      80h           128
Snapshot         PISnapss    28     10000000h     268435456
Archive          PIArchss    29     20000000h     536870912
All Databases    (all)       0-31   FFFFFFFFh     -1

You can enable them using the PICONFIG utility.

d:\pi\adm>piconfig
(Ls - ) PIconfig> @table pi_gen,pitimeout
* (Ls - PI_GEN) PIconfig> @mode create,t
* (Cr - PI_GEN) PIconfig> @istr name,value
* (Cr - PI_GEN) PIconfig> EnableAudit,-1
*> EnableAudit,-1
* (Cr - PI_GEN) PIconfig>
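
The -1 above enables auditing for all databases. To audit only selected subsystems, set EnableAudit to the sum of the decimal values from the table; for example, for the snapshot and archive subsystems (a quick sanity check of the arithmetic in Python):

# Bit positions taken from the table above.
SNAPSHOT = 1 << 28   # 10000000h = 268435456
ARCHIVE  = 1 << 29   # 20000000h = 536870912

print(SNAPSHOT | ARCHIVE)       # 805306368  -> use EnableAudit,805306368
print(hex(SNAPSHOT | ARCHIVE))  # 0x30000000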

Because these files are locked while the PI server is running, you need to declare a start and end of backup to copy the file before you can access it. The following sequence will safely grab a copy of the archive subsystem audit trail and dump a specific time range:

d:
cd pi\adm
rem Flag the start of a backup so the locked audit file can be safely copied
piartool -systembackup start -subsystem piarchss
copy d:\pi\log\piarchssaudit.dat d:\pi\adm\archaudit_Copy.dat
rem Release the backup flag as soon as the copy is done
piartool -systembackup end -subsystem piarchss
rem Dump a specific time range from the copied audit file
pidiag -xa d:\pi\adm\archaudit_Copy.dat -st "1-apr-2010" -et "2-apr-2010" >> ArchAudit.log
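
If you need this dump regularly, the same sequence is easy to script. A minimal sketch, assuming the same d:\pi layout as above and that piartool and pidiag resolve from the adm directory:

import subprocess
from pathlib import Path

PI = Path(r"d:\pi")
AUDIT_COPY = PI / "adm" / "archaudit_Copy.dat"

def run(*args):
    """Run a PI command-line tool from the adm directory; raise on failure."""
    subprocess.check_call(list(args), cwd=str(PI / "adm"))

# Same sequence as the manual steps above: flag a backup, copy, unflag.
run("piartool", "-systembackup", "start", "-subsystem", "piarchss")
try:
    AUDIT_COPY.write_bytes((PI / "log" / "piarchssaudit.dat").read_bytes())
finally:
    run("piartool", "-systembackup", "end", "-subsystem", "piarchss")

# Dump the copied audit file for the same time range as above.
with open("ArchAudit.log", "ab") as log:
    subprocess.check_call(
        ["pidiag", "-xa", str(AUDIT_COPY), "-st", "1-apr-2010", "-et", "2-apr-2010"],
        cwd=str(PI / "adm"), stdout=log)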


Should You Go to the OSIsoft 2010 PI User Conference?

March 21, 2010

In 1996, I attended my first OSI PI User Conference as a new user to understand what modules and features to target, but I came away with a whole lot more! Besides finding out that I was the only one of 74 people with PI Batch intending to implement it on Windows NT (vs. VMS), I found out that these User Conferences were excellent from many perspectives. One is able to:

  • Meet OSIsoft employees willing to talk about PI and how to get the most out of it.
  • Meet other customers who share how they implemented PI and where they benefited.
  • Review 3rd-party vendor products and services hands-on.
  • Learn about functionality, ways to implement it, and how it all fits together.
  • Learn what's new now and where OSIsoft intends to take its product offering.

This might sound like a typical conference justification; however, considering that I shy away from such events and that I'm frugal, yet I still consider it worthwhile, this is a strong endorsement. In fact, each year the benefits we could measure from contacts and from what we learned exceeded the cost of sending 2 people from the east coast to the conference.

Initially, I underestimated the value of staying in touch with the OSIsoft staff. Besides putting a face to the voice on the other end of the phone, they offer their own personal insight into the product and talk about the goals they are working toward. It's great to hear OSIsoft's marketing presentations, but it really comes together for me when I hear how their employees apply it.

I've followed the evolution of the PI application since 1996 and I'm impressed with how they are able to grow the product, and especially how they've expanded into new applications for their product (ex. data centers, power meter readings). They've kept the product current and improved performance throughout the evolution of operating systems and software development environments. However, OSIsoft's evolution meant that it was important for me to stay in touch with it, so we were taking advantage of what we'd already paid for and planning ahead to utilize what was coming. I found that the OSIsoft User Conference was the best way to learn what I needed to do with PI in the next 12 months and the next 2 years.

Since I wrote about the OSIsoft Regional Conferences last Fall, people have asked me whether the Regional Conference can replace the annual trek to the User Conference. My answer is: attend both the regional and annual events. Send your strategic staff with long-term vision to the User Conference, and send your hands-on PI administrators and key users to the regional conference, so more people get the message and can run with it.

When people say their company cannot afford to send them, I respond that they probably cannot afford NOT to send them. Getting the most out of the initial PI cost and then the annual TSA cost should be enough justification, but the annual gains from better utilizing what you already own can make it an easy decision… see you at the OSIsoft 2010 User Conference!

Unable to start PI server on Windows 7: access denied

February 5, 2010

With Vista, Windows 2008, and now Windows 7, the UAC (User Account Control) layer gets in the way of manually starting an OSIsoft PI server's services because of privileges. If you open a command prompt (Run and CMD) from a standard user account and then fire off the C:\PI\adm\PISrvStart.bat command, you will receive an 'access denied' error 5 for each Net Start command within the batch procedure.

To get around this, find the Command Prompt under Start > All Programs > Accessories, right-click on it, and select 'Run as administrator'. Proceed to run the C:\PI\adm\PISrvStart.bat command as before; it should work correctly and start each of the Windows services related to PI.
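
If you script server restarts, you can fail fast with a clear message instead of a cryptic error 5. A minimal sketch in Python (using the standard ctypes elevation check; the PISrvStart.bat path is the same one used above):

import ctypes
import subprocess
import sys

def is_elevated() -> bool:
    """True when this process runs with an elevated (administrator) token."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False  # not on Windows, or no UAC available

if not is_elevated():
    sys.exit("Error 5 ahead: re-run from an elevated ('Run as administrator') prompt.")

# Start the PI services, same as running the batch file by hand.
subprocess.check_call(r"C:\PI\adm\PISrvStart.bat", shell=True)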

Would PI Batch be useful for your company?

January 31, 2010 Leave a comment

Many companies hear the word Batch and figure it’s not for them, especially industries with continuous manufacturing (ex. power). After a little education, companies come around and soon tap into the power of visualizing their processes from a Batch perspective.

Through my 30 years working with batch processes, my participation on the ISA S88 Batch standards committee, and exposure to a diverse set of industries, I have found that every industry can improve its processes and products by incorporating PI Batch into its analysis.

In simplistic terms: PI Batch is a convenient way to select time ranges in order to compare time-series data. These comparisons can be between different time ranges on the same process/machine or between different processes/machines (a minimal sketch of the idea follows the list below):

  • Product Quality from a run last night against a known good run (golden batch) last month
  • Product yields in the Summer versus the Winter
  • Paper machine A against Paper machine B
  • Night shift against Day shift
  • Product complaints against formulation A versus formulation B
  • Profitability with manager A versus manager B
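
Here is that sketch: a few lines of plain Python (not the PI Batch API) treating batches as named time ranges over made-up data and comparing a summary statistic across them:

from statistics import mean

# Hypothetical batch records: batch name -> (start_hour, end_hour).
batches = {
    "last_night":   (0, 8),
    "golden_batch": (100, 108),
}

# Hypothetical time-series samples: (hour, vessel_weight).
series = [(t, 500.0 + (t % 8) * 12.5) for t in range(120)]

def batch_slice(series, start, end):
    """Pull out just the values that fall inside one batch's time range."""
    return [v for t, v in series if start <= t < end]

# Compare the same statistic across the two time ranges.
for name, (start, end) in batches.items():
    values = batch_slice(series, start, end)
    print(f"{name}: mean weight {mean(values):.1f}")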

It’s the creative implementation of this simple concept that makes it so powerful! PI Batch can be applied in ways that people never thought of:

  • Tanks: Mixing a Vessel of material, blending products together
  • Silos: Stratification of dry ingredients in a hopper (ex. pellets)
  • Roll Stock: Making a roll of paper, cutting up a roll, splicing a roll
  • People: Shifts of workers (days vs evenings vs nights), workers at different locations
  • Raw Materials: Usage of a given raw material lot (a rail car of coal, color dye lot, supersack of silver nitrate)
  • Production metrics (downtimes, production holds, throughput, recycle volumes)
  • Quality metrics (product grades, reject rates, sampling plans)
  • Financial metrics (profit level, expenses, overtime)
  • Seasonality (Spring, Summer, Fall, Winter)
  • SOPs: Standard Operating Procedure revisions
  • Maintenance Cycles: Runtime of a piece of equipment (turbine generator online, a motor running until the next rebuild)
  • Product Yields: Which field the produce was grown in
  • Timeframes: Day of the week or other fixed timeframe (24 hr production of power)
  • Suppliers: Which raw material supplier or source location

Most people are surprised to hear that you can run multiple batch schemes on the same equipment. You can have one batch configuration for equipment, another for seasonality, another for shifts, and another for raw material supplier… all against the same part of your process. You simply choose the particular 'flavor of batch' that you want to compare, and you can switch between different batch definitions based on your findings:

  • Start by looking at production rates and/or yields
  • Switch to downtime analysis
  • Cross-reference to shift analysis
  • Verify there isn’t additional influence from seasonality

PI Batch is easy to configure, and if you don't find a particular configuration useful, you can remove it. Not only does it save time in quickly finding a set of time ranges, it also helps you visualize your products and processes to identify issues, which provides a mechanism for continuous improvement.

So it doesn't matter if you are a traditional batch process, a transitional process with both batch and continuous, or a totally continuous process; everyone can benefit from the use of OSIsoft PI Batch.

Give us a call now: 828-862-6667 x300 or send an email


Was the OSIsoft PI Regional Seminar Worthwhile?

October 31, 2009

Our customers at Automated Results regularly ask us if it is worthwhile to attend the Regional Seminars around the country and the Annual Users Conference in California. The fact that we have attended each year since 1996 is a pretty good testimony, but it’s worth elaborating on.

I attended the Oct 27th, 2009 OSIsoft Regional Seminar in Raleigh, NC; it was a 1-day seminar, yet it provided a tremendous amount of value:

  • Information on new PI application offerings and releases
  • Understanding where OSIsoft is heading so we can plan ahead
  • Keeping in touch with OSIsoft personnel, our customers, and other OSIsoft customers
  • Sharing new ideas and uses for PI with others
  • Learning how others are using PI

As our customers ask to be more visible online and move their information closer to real time, it was interesting to hear Ron Kolz's and Julie Zeilenga's talks about OSIsoft expanding its presence in real-time applications like the Microsoft Data Centers and front-end data loading from commercial and residential power customers. It's great to see OSIsoft's marketing and engineering innovations paying off, which reaffirms their leadership position in the real-time industry.

Personally, I look forward to the OSIsoft Product Roadmap, which describes what software has been released, each product's features, and how to apply them. It also covers what is coming in the future so we can understand how it all fits together. Jay Lakumb talked about the gamut of products; the ones that caught my interest were the Asset Framework (PI AF) and PI Notifications. As a prior member of the ISA S88 Batch standards committee in the early 1990s, I strongly believe that PI customers can benefit from mapping their equipment (assets) and associated tags in PI. PI AF is a replacement for the original Module Database. It has scalable performance because it utilizes MS SQL Server, and populating the AF no longer uses up PI tags, which was a deterrent in the past. I'm very excited about PI AF; we have a customer that has been holding off defining their equipment hierarchy until the AF is released, so I'll be digging into it immediately.

I'm also interested in the Web Services offering for communicating with PI. We have been implementing MOSS SharePoint, working with eCommerce sites, and implementing web applications on the Internet (business-to-business, multiple locations within a corporation, and retail). We've found web services to be an efficient way to communicate between different Internet-enabled applications. We are also finding that these applications could benefit from access to time-series information, so I'm looking forward to prototyping the PI Web Services.

Others at Automated Results have been involved in implementations of High Availability (HA), and I've listened to their initial challenges and workarounds, but it was great to hear Chris Coen talk about the OSIsoft HA offering: how it works, what PI functionality isn't HA-compliant yet (ex. PI SDK, PI Batch), and the prioritized implementation plan to make all modules HA-compliant. N-way high availability for PI databases and interfaces is a huge challenge, but it is essential functionality for an ever-increasing number of applications (ex. power, pharmaceutical, etc).

Julie Zeilenga talked about the recent release of the BACnet interface and its application in the Microsoft Data Centers. The PI OPC interface is a powerful interface for communicating with a large number of control systems; we implement a lot of these interfaces at industrial sites. I think the BACnet interface is equally powerful for interfacing with utilities and computer/network equipment. Its range of application is amazing!

The information sharing and face-to-face networking are extremely valuable to us as an OSIsoft partner and as an OSIsoft customer. Both the annual users conference and the regional seminars are time well spent. If you can't afford the user conference this year, then make sure you get to a regional event; it is a small investment relative to the total cost of ownership of your PI implementation.

Thanks for stopping by and taking the time to read our blog.

Check out our OSI PI experience

If you have questions or comments, we’d be thrilled to hear from you! Let us know if there is something you think we should discuss.
