Product Model Staging and Release to Production

Copyright © by Peter Einstein

System Landscape

Even the smallest SAP environments usually include at least a Development environment, a more comprehensive Quality Assurance platform, and the Production system itself:

DEV → QA → PROD

But even this minimal landscape is not without challenges.

The SAP Correction and Transport System does a world-class job of managing Source Code and Customizing changes across the landscape, but when it comes to Master Data, users find themselves with a steaming hot bowl of SAP’s alphabet soup: LSMW, ALE, CATT, MDM. Adding to the fun, SAP configurable product (KMAT) models comprise multiple Master Data objects:

Application Link Enabling (ALE) is the method of choice to transfer model objects between R/3 clients and platforms. With CRM in the picture, completed models are then compiled into Knowledge Base Runtime Versions and moved to CRM via Middleware. Usually, the CRM environment mirrors the R/3 one:

Product changes obviously involve numerous master data transactions, ALE execution, Runtime Version generation, and Middleware processing; and throughout this cycle, release timing and process coordination remain critical. This complexity argues for a judicious balance between isolation of Development activities, and efficient data movement between Development and Production. At a minimum, product models must be validated on a robust, complete QA platform before release to Production. A minimal R/3 landscape might look like this:

Issues (life in an imperfect world):

  • The Development system can be unstable with respect to the code base, but must serve as the primary repository for the Company’s knowledge base.
  • Models require a rich set of Master Data that must now be kept current in Development.
  • Refreshes from Production can undermine other development activity.

Alternatively, some clients develop their models in the QA environment, and then move updates back to Development periodically:

This approach is also not without problems:

  • Modelers must use Engineering Change Management (ECM) cautiously, deploying future-dated releases, to avoid conflicts between active QA testing and Product Model development.
  • A true Sandbox is missing. Saved transactions on QA can lock out deletions of false-start model objects, e.g., characteristics and values.
  • A true QA environment requires periodic refreshes. The Modeler’s environment remains at risk of disruption.

Gold Client Strategy

I have heard this strategy discussed for several years. A so-called Gold Client is maintained separately from Development and QA clients, so that it can be “pristine” and “transaction free” to facilitate model development.

Client 900 is separate from the true QA client. By disallowing transactions, product models can theoretically be changed more freely without using ECM or archiving. But there is a hidden fallacy here, because development on the Gold Client ultimately must move to Production. The modeler can be led down the primrose path by executing changes that are feasible on Gold, only to encounter ALE failures when the time comes to move the stuff to Production. For example, a characteristic can be deleted from a class without incident on the Gold Client. But this class must later move to Production via ALE where deletion is NOT permitted. Furthermore, ECM is unavoidable everywhere, because if target objects on Production have already been maintained using ECM, then source objects on Gold must be maintained similarly.

The nominal advantages of a Gold Client are isolation, stability, and focus. If you deploy a Gold Client, you’ll probably put it on QA, a natural repository for master data and a close match to Production, enhancing its verisimilitude for model testing. But a Gold Client is not a free lunch. The diagram hints at the effort involved to maintain so many instances with ALE. At my latest client, we copped out and went with a variation of Option 1. Are we perfectly happy? No.

ALE Processes for Configurable Product Models

At the November 2004 CWG meeting, SAP introduced the concept of “PLM Product Data Replication”. This solution promises to place all Configurable Product Model changes for a given release into a special Folder, and then permit a holistic transfer of the model from one instance to another. This would solve an enormous problem for modelers. The great ALE challenge is moving a time-dependent version of a multi-object Product Model between R/3 instances, while ensuring that master data objects are ALE’d in the correct sequence to avoid errors on the target instance.

I would be delighted to see all of the following procedures and verbiage obviated by the PLM Workbench, but until then, here are a few guidelines for navigation through the ALE forest.

ALE Transactions

ALE provides a series of transactions that allow the user to select model-related objects on the source R/3 system, and send them to one or more target systems. These transactions are found under Tools → ALE → Master Data Distribution. The user specifies the objects, e.g., Variant Table content or Characteristics; the Target System, e.g., VPRCLNT810; and the relevant Change Number (ECM) for the move. Note: All of this assumes the ALE landscape has already been configured, usually but not always by the Basis team.

Execution is a no-brainer. Supply the necessary data, and SAP does the rest: Based on message type (e.g., CHRMAS for Characteristics), SAP generates an IDOC on the outbound system and transfers the IDOC to the target system, where it is processed by a BAPI that replicates the standard maintenance transaction, e.g., CT04 for a Characteristic.
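To make those inputs concrete, here is a minimal sketch (in Python, not SAP code) of what a single ALE distribution run needs from the modeler. Only the CHRMAS message type and the VPRCLNT810 target are taken from the text above; the object names and the ECM number are hypothetical placeholders.

```python
# Minimal sketch (not SAP code): the inputs a modeler supplies for one ALE
# master data distribution run. Only CHRMAS and VPRCLNT810 come from the text;
# object names and the ECM Change Number are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class AleDistributionRequest:
    object_type: str     # e.g., "Characteristic"
    message_type: str    # e.g., "CHRMAS" for Characteristics
    objects: List[str]   # names copied from the release spreadsheet
    target_system: str   # logical system, e.g., "VPRCLNT810"
    change_number: str   # ECM Change Number governing the move

request = AleDistributionRequest(
    object_type="Characteristic",
    message_type="CHRMAS",
    objects=["FRAME_SIZE", "COLOR"],   # hypothetical characteristics
    target_system="VPRCLNT810",
    change_number="500000123",         # hypothetical ECM number
)
print(f"Send {len(request.objects)} {request.object_type} object(s) "
      f"to {request.target_system} under ECM {request.change_number}")
```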

Sequence

The objects in an SAP Product Model are highly interdependent, and one thing builds on another. Creation of master data objects must proceed in a specific sequence or errors will occur, which must be resolved before proceeding to the next step. For example, a characteristic value must be created before it can be used in a Variant Table. Conversely, if a characteristic value is deleted while still in use by business rules, the rules will fail. The following maintenance sequence can usually be followed for the objects in the SAP Product Model, although exceptions may occur (preconditions, TYPE_OF selection conditions, etc.):

  1. Characteristics
  2. Variant Tables
  3. Classes (structure, not content)
  4. Object Dependencies (simple dependencies)
  5. Materials
  6. Material Classification
  7. Dependency Nets (complex dependencies)
  8. Bills of Material
  9. Configuration Profiles (for affected KMAT materials)

The same sequence required for model creation must be enforced for ALE processes, which essentially replicate model maintenance transactions across SAP instances (but are less tolerant of errors). This ensures consistency on both the source and target instances; a small sketch of this ordering appears below.
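The ordering itself is mechanical, so a small illustrative sketch (Python, not SAP code) can enforce it: sort a release's object lists into the documented sequence before executing the corresponding ALE transactions. The release contents shown are hypothetical, and sequencing exceptions still require manual judgment.

```python
# Illustrative sketch only: put a release's object lists into the documented
# creation/ALE order before sending them. Release contents are hypothetical.
ALE_SEQUENCE = [
    "Characteristics",
    "Variant Tables",
    "Classes",
    "Object Dependencies",
    "Materials",
    "Material Classification",
    "Dependency Nets",
    "Bills of Material",
    "Configuration Profiles",
]

def ordered_steps(release: dict) -> list:
    """Return (object_type, objects) pairs in the mandated ALE sequence."""
    rank = {name: i for i, name in enumerate(ALE_SEQUENCE)}
    unknown = set(release) - set(rank)
    if unknown:
        # Exceptions (preconditions, TYPE_OF selection conditions, ...) need a
        # manual sequencing decision; flag them rather than guessing.
        raise ValueError(f"No defined sequence for: {sorted(unknown)}")
    return sorted(release.items(), key=lambda item: rank[item[0]])

release = {  # hypothetical release content, deliberately out of order
    "Bills of Material": ["KMAT_BIKE"],
    "Characteristics": ["FRAME_SIZE", "COLOR"],
    "Variant Tables": ["VT_FRAME_COLOR"],
}
for object_type, objects in ordered_steps(release):
    print(f"ALE step: {object_type} -> {objects}")
```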

It remains unclear how the PLM Replication tool will address sequencing, and particularly sequencing exceptions. Some form of iterative ALE error-handling seems unavoidable.

The Deletion Problem: Catch-22?

Consider the following scenario: the user must delete a characteristic value. Deletion occurs in roughly the opposite sequence from creation: first remove any references to the value in Classification, then in Variant Tables; only then is it safe to remove the value from the Characteristic itself.

Let’s say the user completes this sequence in the Modeling client (to test/validate behavior) and then starts the ALE process. Adhering responsibly to the recommended sequence, the user first ALEs the Characteristic to a target system. But the Characteristic is already missing the deleted value. When the Characteristic hits the target system, objects previously classified with that value are rendered inconsistent; they contain a value that no longer exists. If the user now tries to ALE Classification data, SAP issues a hard error, because the system is understandably reluctant to update an already screwed up piece of master data.

My first fix to this problem was to ALE objects to the target system after each key step, but this defeated the purpose of a staging environment. I was pushing master data changes into Production before testing was even completed. File this one under “Bad Ideas”. So what is the answer? Is there an answer? How do I adhere to the rigorous sequential requirements of Product Modeling and support an ALE Staging concept? Do I develop directly in Production? Do I follow my own bad practice of moving stuff to Production piecemeal, albeit in the right sequence? Do I make it okay by using a forward-dated ECM record to limit the risk?

In a multiple instance R/3 staging environment, it is not practical or desirable to execute tactical intermediate ALE transfers to production. The full model must be deployed and tested on each platform, and then moved as a whole.

Deletion Rules of Thumb:

  • If deletions are involved, execute value deletions last of all, after first declassifying all affected objects. This means that if you are deleting AND adding values to characteristics, the affected characteristics will contain a superset of new and deleted values for a short period of time, on both the source and target systems (see the sketch after this list).

  • ECM is the only option. You cannot delete values at all if configured objects exist in the system, unless the deletion is time-stamped with ECM.
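As a rough illustration of these rules (again Python, not SAP code), the sketch below plans a value deletion in the safe order: declassify first, clean up Variant Tables next, and delete the value itself last, refusing to proceed without an ECM Change Number. All object names and the ECM number are invented for the example.

```python
# Hypothetical sketch of the deletion rules of thumb: references go first,
# the value itself goes last, and ECM is mandatory.
from typing import List, Optional

def plan_value_deletion(characteristic: str, value: str,
                        classified_objects: List[str],
                        variant_tables: List[str],
                        ecm_number: Optional[str]) -> List[str]:
    if ecm_number is None:
        raise ValueError("Deleting a used value must be time-stamped with ECM")
    steps = []
    # 1. Remove the value from Classification on every affected object.
    steps += [f"Reclassify {obj}: drop {characteristic} = {value}"
              for obj in classified_objects]
    # 2. Remove rows that reference the value from Variant Tables.
    steps += [f"Update {table}: remove rows with {characteristic} = {value}"
              for table in variant_tables]
    # 3. Only now is it safe to delete the value from the Characteristic.
    steps.append(f"Delete value {value} from {characteristic} (ECM {ecm_number})")
    return steps

for step in plan_value_deletion("COLOR", "MAUVE",
                                classified_objects=["KMAT_BIKE"],
                                variant_tables=["VT_FRAME_COLOR"],
                                ecm_number="500000124"):
    print(step)
```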

A Few More ALE Tips

Use Spreadsheets

Moving a Product Model involves numerous master data objects, and ALE transactions meet this challenge by letting the user move multiple objects of one type (e.g., Characteristics) between SAP instances in one pass. Maintain a separate spreadsheet for every model release, with a section or tab dedicated to each object type, listing each object affected by the change. This provides an audit trail of the change and enables simple cut/paste from the spreadsheet into ALE transaction screens. Flag each object as it is moved, to help manage sequencing; a rough sketch of such a tracking structure follows.
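For illustration only, the hypothetical sketch below mimics that spreadsheet layout: one section per object type, one row per object, with a flag marking what has already been moved. In practice the spreadsheet itself is the tool; the code merely shows the structure.

```python
# Hypothetical release-tracking structure: one section per object type,
# one row per object, with a "moved to target" flag. Contents are invented.
import csv

release_plan = {
    "Characteristics": {"FRAME_SIZE": True, "COLOR": False},
    "Variant Tables": {"VT_FRAME_COLOR": False},
    "Bills of Material": {"KMAT_BIKE": False},
}

with open("release_tracking.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["Object Type", "Object", "Moved to Target"])
    for object_type, objects in release_plan.items():
        for name, moved in objects.items():
            writer.writerow([object_type, name, "X" if moved else ""])
```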

Tracking/Error Resolution

ALE automates execution of master data maintenance transactions on the target system, but the underlying BAPI version of each transaction is much more rigorous about data, and provides less feedback than the on-line equivalent. As you fire off ALE transactions, keep an eye on transaction BD87 “Status Monitor for ALE Messages” on the target system, to track the success or failure of ALE postings. On a bad day, BD87 will tell you to see the “System Log”. Use transaction SLG1 for object “CAPI”. SLG1 reports the actual error messages from transactional execution. Sequencing is a common culprit. If you correct the underlying problem, first try to reprocess the existing IDOC in BD87. This is better than sending a completely new IDOC from the source system.

Watch Out for Interface Design

When Product Models are moved across clients, the Interface Design is a separate component that must be moved independently. It may or may not work for you. If not, it may be easier simply to maintain Interface changes directly on the Target environment, from within transaction CU50 (yes, I know this is a “bad thing”).

Version Management for Configurable Product Models in CRM

Congratulations. You have finally gotten your R/3 Product Models into R/3 Production, and they even appear to function properly. Now comes the tricky part.

In a pure R/3 environment, the centralized ECM system is continuously aware of what content is in effect on any given day, and uses the appropriate content based on a date selected by the business transaction. As long as changes are managed under ECM, R/3 knows to present those changes only when they become effective, based on, for example, Document Creation Date, Customer Requested Date, or Material Availability Date.

In the world of CRM/IPC, the only way to inform the IPC about time-dependent content is to create a date-stamped Knowledge Base Runtime Version (KB Runtime, or RTV). Each KB Runtime Version contains the model’s state at a particular point in time, and a separate RTV is necessary for each time-dependent state of the model.

When both R/3 and the IPC are in use, you need rigorous controls to maintain synchronization between the time-dependent changes deployed in R/3 with ECM, and the KB Runtimes used by the IPC. But even if the models are maintained in perfect lockstep, meaning that for each date-dependent change executed in R/3, a matching Runtime Version is created in the IPC, the overall system landscape must still be configured carefully to ensure that the KB version used by each system adheres to the client’s business policy.

This is as clear as mud without some examples. In a perfect world, a CRM order taken using the IPC will drop immediately to R/3 via Middleware, the lockstep KB runtimes will return identical content, replication will succeed, and everyone goes home at five.

But what if the CRM order is delayed for a few days because of credit problems, but the model has changed in R/3? Which KB will CRM use, and which date will R/3 use to find the model?

What if there are no credit problems, but the R/3 ATP check performed in CRM returns a future date for BOM components? By default, CRM uses its own document creation date to grab the appropriate KB runtime version. Will R/3 use the same date?

If R/3 and CRM use different dates for KB lookup, and Sales Order BOM explosion results are different on the two dates, order replication will FAIL. And by the way, this is the standard, shipped setup in SAP.
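A toy calculation makes the failure mode explicit. In the sketch below (Python, and emphatically not SAP logic), each side simply picks the model state valid on “its” date; if a change falls between the two dates, the two sides disagree and replication breaks. The dates and validity boundaries are invented for the example.

```python
# Toy illustration of the date-mismatch problem: each system resolves the
# model state valid on its own date. Dates and boundaries are hypothetical.
from datetime import date

MODEL_VALID_FROM = [date(2005, 1, 1), date(2005, 2, 1)]  # two model states

def model_state(lookup_date: date) -> date:
    """Return the valid-from date of the model state effective on lookup_date."""
    return max(d for d in MODEL_VALID_FROM if d <= lookup_date)

crm_date = date(2005, 1, 28)  # CRM default: document creation date
r3_date = date(2005, 2, 3)    # e.g., a later date used for BOM explosion in R/3

if model_state(crm_date) != model_state(r3_date):
    print("Different model states selected: order replication will fail")
else:
    print("Same model state on both sides: replication can succeed")
```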

Important KB Selection Controls

For the handful of intrepid readers still awake at this point, there are two essential tools in SAP that allow you to control the KB look-up process to ensure proper synchronization.

In CRM: The CRM_CONFIG_BADI contains a SET_KB_DATE method. This allows an override of the default approach of using the CRM document date.

In R/3: USEREXIT_CONFIG_DATE_EXPLOSION in Include MV45AFZD allows an override of standard SD logic, to force a particular BOM explosion date.
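Conceptually, both controls do the same thing: substitute an agreed date for the system default so that CRM and R/3 resolve the same model state. The sketch below captures only that idea in Python; it is not the BAdI or user-exit code, and the policy parameter and dates are hypothetical.

```python
# Conceptual sketch only -- not the actual BAdI/user-exit code. The point is
# simply to pick one agreed date for KB lookup and BOM explosion on both sides.
from datetime import date

def kb_lookup_date(document_date: date, requested_date: date,
                   policy: str = "document") -> date:
    """Pick the date both CRM and R/3 should use for KB/BOM explosion."""
    if policy == "requested":
        return requested_date   # e.g., configure/price against the requested date
    return document_date        # default: the order entry (document) date

print(kb_lookup_date(date(2005, 1, 28), date(2005, 2, 3), policy="requested"))
```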

Which control to use (if at all) varies based upon:

  • The Client’s business requirements and policies

  • System settings, like the CRM ↔ R/3 integration Middleware scenario

  • Where the business process sits on the Bundle-to-Order ↔ Engineer-to-Order continuum

Closing Remarks

This document does not address a pile of other Version Management issues. For example, when the velocity of change is high, Runtime Versions can proliferate to the extent that the IPC will have difficulty digesting them. What about archiving? What is the optimal balance between Knowledge Base Objects and the Runtimes they contain? Should we maintain a rolling set of RTVs, and overwrite them periodically, or create new ones for every ECM? Which is better: Class-based or Material-based KB Objects? Is it EVER okay to date-shift a Runtime Version?

Alas, a discussion of strategies for ongoing version management will have to wait. I have to get back to my regular job. And this is probably a good thing, because from the blunders I am likely to make in the future, maybe I’ll learn enough of value to share in a subsequent article.


Please send comments to Peter Einstein, at Peter.Einstein@sap.com.