
Data Warehouse – Anatomy of DW/Reporting Deployment

 

In my last post I went over the management pack synchronization process that brings MPs over from Service Manager, and how those MPs drive the structure, data, and reports for the data warehouse and reporting. Once those MPs are synchronized between Service Manager and the data warehouse, we need to get the data and/or reports deployed for user consumption.

Sequentially, deployment works in this way (see figure below):

  1. Once all identified MPs are synchronized with the DW, MP sync triggers the report deployment workflow.
  2. Since DWStagingandConfig is the final destination of the synchronized MPs, the deployment workflow queries the DWStagingandConfig database for any new or changed reports to deploy, and for any reports to remove.
  3. The deployment workflow then publishes any new or updated reports to the SQL Server Reporting Services server via the SSRS web services.
  4. SSRS stores the reports and the appropriate metadata.
  5. MP sync also triggers the schema deployment workflow.
  6. Once again, the information driving schema changes is retrieved from the DWStagingandConfig database, based on the newly synchronized MPs.
  7. The schema changes are then deployed to the DWRepository.
  8. Any necessary changes to the Extract, Transform, and Load modules are made in the DWStagingandConfig database.

MPs that contain only Service Manager-specific information will not trigger the deployment activities; deployment is triggered only for new DW/Reporting-specific elements. In my next post I will dive into what Extract, Transform, and Load (ETL) is, its benefits, and why deployment makes changes to it.
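If you want to watch these deployment workflows from the database side, the Infra.Process and Infra.Batch tables in DWStagingandConfig record each run (the same tables the troubleshooting KB later in this feed queries). A minimal, unsupported sketch (the '%Deploy%' name filter is my assumption about how the deployment processes are named):

-- List batches for the deployment-related processes.
-- Run against DWStagingandConfig; read-only and unsupported.
SELECT p.ProcessName, b.BatchId
FROM Infra.Process AS p
JOIN Infra.Batch AS b ON b.ProcessId = p.ProcessId
WHERE p.ProcessName LIKE '%Deploy%'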

 

[Image: DeploymentAnatomy]


Introduction to the Data Warehouse: Custom Fact Tables, Dimensions and Outriggers

So you’ve deployed the data warehouse and tried out the reports, but now you want to make it your own. You may be trying to recreate some reports you’ve been using forever and now need to run them against the Service Manager platform, or perhaps you simply want to take full advantage of the customizations you’re doing in the Incident or Change Management solutions and want those changes to flow through to the reports. Either way, if you’re stumped on how to proceed, our latest posts from the Platform team will help you extend and customize the Data Warehouse to enable the in-depth analyses you’re aiming for.

Danny Chen, one of the Developers on our common Platform team, did a great job writing up how to create fact tables, dimensions and outriggers. If you’re not familiar with data warehousing principles, I’ll provide some clarity as to what these terms mean and how they apply to Service Manager below. If you understand the principles well enough and are chomping at the bit to dig into the details, here are the links:

1. A Deep Dive on Creating Relationship Facts in the Data Warehouse

2. A Deep Dive on Creating Outriggers and Dimensions in the Data Warehouse

Principles behind the Platform: Dimensional modeling and the star schema

The data warehouse is a set of databases and the processes to populate those databases automatically. At a high level, the end goal is to populate the data mart, where users will run reports and perform analyses to help them manage their business. We keep this data around longer in the warehouse than in the CMDB because its usefulness for trending and analysis generally outlives its usefulness for normal transactional processing needs.

A data warehouse is optimized for aggregating and analyzing a lot of data at once in a lot of different, unpredictable ways. This differs from transactional processing systems, which are optimized for write access on few records in any given transaction, and whose transactions are more predictable in behavior.

To optimize the data warehouse for performance and ease of use, we use the Kimball approach to dimensional modeling. What this means to you is that tables in the DWDataMart database are logically grouped into subject matter areas which resemble a star when laid out in a diagram, so these groupings are often called “star schemas”.

  1. In the center of the star is a Fact table. Fact tables represent relationships, measures & key performance indicators. They are normally long and skinny as they have relatively few columns but contain a large number of transactions.
  2. The fact table joins to Dimension tables, which represent classes, properties & enumerations. Dimension tables usually contain far fewer rows than fact tables but are wider, as they have the interesting attributes by which users slice and dice reports (i.e., status, classifications, date attributes of a class like Created Date or Resolved Date, etc.).
  3. An outrigger is a special kind of dimension table which hangs off another dimension table for performance and/or usability reasons.

Generalized representation of a star schema:

[Image: Star Schema]

Consider what a star schema for a local coffee shop might look like. The transactions are the coffee purchases themselves, whereas the dimensions might include:

  1. Date dimension (to roll up the transactions by both Gregorian and fiscal calendars)
  2. Customer dimension (who bought the coffee)
  3. Employee dimension (who made the coffee)
  4. Product dimension (espresso, drip, latte, breve, etc., and this could get quite complicated if you track the details of Seattleites’ drink orders)
  5. Store dimension
  6. And more

What measures might the fact table have? You could easily imagine:

  1. Quantity sold
  2. Price per Unit
  3. Total Sales
  4. Total Discounts
  5. etc

IT processes aren’t so different from the local coffee shop when it comes time to design your dimensional model. There is a set of transactions which happen, like incident creation, resolution, and closure, which produce some interesting and useful metrics (time to resolution, resolution target adherence, billable time incurred by analysts, duration in status, etc.).
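To make the shape concrete, here is what a typical report query against such a coffee-shop star schema might look like. All table and column names below are hypothetical, invented purely for illustration:

-- Total quantity and sales by fiscal month and product category,
-- aggregated from the fact table across two dimensions.
SELECT d.FiscalMonth,
       p.ProductCategory,
       SUM(f.QuantitySold) AS QuantitySold,
       SUM(f.TotalSales) AS TotalSales
FROM SaleFact AS f
JOIN DateDim AS d ON f.DateKey = d.DateKey
JOIN ProductDim AS p ON f.ProductKey = p.ProductKey
GROUP BY d.FiscalMonth, p.ProductCategory
ORDER BY d.FiscalMonth, TotalSales DESC

Note how the fact table carries only keys and measures; everything a user would filter or group by lives in the dimensions.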

When thinking about extending and customizing your data warehouse, think about the business questions you’d like to be able to answer, read up on dimensional modeling for some tips on best practices, and then check out Danny’s posts on creating fact tables, dimensions and outriggers for the technical know-how.

 

And of course, we’re always here to help so feel free to send me your questions.


Create a report model with localized outriggers (aka “Lists”)

If you've watched my Reporting and Business Intelligence with Service Manager 2010 webcast and followed along in your environment, you may have unintentionally created a report which displays enumeration GUIDs instead of Incident Classification strings, like below. Not too useful. In this post I'll tell you the simple way to fix your report model to include the display strings for outriggers in a specific language, and in a follow-on post I'll share more details on how to localize your reports and report models.

You may be wondering what happened. This is because we made a change in SP1 to handle outrigger values consistently, which removed the special handling we had for our out-of-the-box enumerations in outriggers. If you're now wondering what outriggers are, read up on the types of tables in the data warehouse in my last post, in which I provided the Service Manager data warehouse schema.

Here's the screenshot of the report we need to fix; the rest of the post will explain how to fix it.


Replace table binding in the Data Source view with Query binding

Rather than including references to the outriggers directly (in the screenshot below the outriggers are IncidentClassificationvw, IncidentSourcevw, IncidentUrgencyvw, and IncidentStatusvw) we'll replace these with named queries.

To do this, you simply right click the "table" and select Replace Table > With New Named Query.


You then paste in your query, which joins to DisplayStringDimvw and filters on the language of your choice. Repeat for each outrigger; an example for the status outrigger follows the classification query below.

SELECT outrigger.IncidentClassificationId, Strings.DisplayName AS Classification
FROM IncidentClassificationvw AS outrigger
INNER JOIN DisplayStringDimvw AS Strings ON outrigger.EnumTypeId = Strings.BaseManagedEntityId
WHERE (Strings.LanguageCode = 'ENU')
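For example, assuming the status outrigger follows the same naming pattern (an IncidentStatusId key column, which you should verify in your environment), its named query would be:

SELECT outrigger.IncidentStatusId, Strings.DisplayName AS Status
FROM IncidentStatusvw AS outrigger
INNER JOIN DisplayStringDimvw AS Strings ON outrigger.EnumTypeId = Strings.BaseManagedEntityId
WHERE (Strings.LanguageCode = 'ENU')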


Create & publish your report model

To create a simple report model, right click the Report Models node in the Solution Explorer (right pane) and select Add New Report Model. Follow the wizard, selecting the default options.


If you want to clean it up a little, double click the Report Model, then select IncidentDim on the left.

Scroll down the properties in the center and you'll notice there is now a Role added to the IncidentDim named Classification Incident Classification, along with an Attribute named Classification. This is because using outriggers to describe dimensions is an industry standard approach and SQL BI Dev Studio understands that these outriggers should essentially get added as properties directly to the Incident dimension for the easiest end user report authoring experience.

The attribute is populated directly by the column I mentioned you should not use in reports, so you should select and delete that attribute from your model. You may also rename the Role "Classification Incident Classification" to a more user-friendly name like "Incident Classification" if you'd like to.


Now save, right click your report model and click Deploy.

Create a report to try out your new report model

Open up SQL Server Reporting Services Report Builder (the screenshots below use Report Builder 3.0). If you haven't gotten a chance to check it out yet, here's a good jump-start guide.


Follow the wizard, select your newly published report model:


Drag & drop your Incident Classification and Incidents measure. Hit the red ! to preview.


Drag & drop to lay out the report.


Continue with the wizard, selecting the formatting options of your choice. If you would like, you can then resize the columns, add images and more. For our quick and simple example, though, I'm going to intentionally leave formatting reports for another post. If you've been following along, your report should now look like this:


Go ahead and publish to the SSRS server under the /SystemCenter/ServiceManager/ folder of your choice to make the report show up in the console.


 


How long does the Service Manager Data Warehouse retain historical data?

The short answer is that we keep data in the warehouse for 3 years for fact tables and forever for dimension and outrigger tables. Antoni Hanus, a Premier Field Engineer with Microsoft, has put together the detailed steps on how to adjust this retention period so you can retain data longer or groom it out more aggressively.

DISCLAIMER: Microsoft does not support direct querying or manipulation of the SQL Databases. 

To learn more about the different types of tables in the data warehouse, see the blog post which describes the data warehouse schema.

To determine which are the fact tables and which are the dimension tables, you can run the appropriate query against your DWDataMart database:

SELECT WarehouseEntityName,
       ViewName,
       wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

 

SELECT WarehouseEntityName,
       ViewName,
       wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Dimension'

NOTE: Microsoft does not support directly accessing or managing the tables (dimensions, facts, or outriggers).

Instead, please use the views named in the ‘ViewName’ column of the queries above.
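If you prefer a single pass, the two queries above can be combined. Note that the 'Outrigger' type name here is an assumption on my part; check the rows of etl.WarehouseEntityType for the exact values in your environment:

SELECT WarehouseEntityName,
       ViewName,
       wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName IN ('Fact', 'Dimension', 'Outrigger')
ORDER BY wet.WarehouseEntityTypeName, WarehouseEntityName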

Fact Table Retention Settings

There are two types of retention settings in the data warehouse:

1) Global - The global retention period (set to 3 years by default) which any subsequently created fact tables use as their default retention setting.

2) Individual Fact - The granular retention period for each individual fact table (uses the global setting of 3 years, unless individually modified).

Global:

The default global retention period for data stored in the Service Manager Data Warehouse is 3 years, so all OOB (out-of-the-box) fact tables use 3 years as the default retention setting.

Any subsequently created fact tables will use this setting upon creation for their individual retention setting.

The default global setting value is 1576800 minutes, which is 3 years (1576800 = 1440 minutes per day * 365 days * 3 years).

This value can be verified by running the following SQL Query against the DWDataMart database:

select ConfiguredValue from etl.Configuration where ConfigurationFilter = 'DWMaintenance.grooming'
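If you need to change the global value, the same row can be updated following the unsupported pattern used for individual fact tables below. This is a sketch only; back up DWDataMart first, and note that the exact shape of etl.Configuration is an assumption here:

-- Example: raise the global retention to 5 years.
-- 1440 minutes per day * 365 days * 5 years = 2,628,000 minutes.
UPDATE etl.Configuration
SET ConfiguredValue = '2628000'
WHERE ConfigurationFilter = 'DWMaintenance.grooming'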

Individual Fact Tables:

Individual fact tables will inherit the global retention value upon creation, or can be customized to a value that is different from the default global setting. 

OOB individual fact tables that were created upon installation can also be individually configured with a specific retention value as required.

All of the Fact tables in the Database can be returned by running the following query against the DWDataMart Database:

SELECT WarehouseEntityName,
       ViewName,
       wet.WarehouseEntityTypeName
FROM etl.WarehouseEntity AS we WITH (NOLOCK)
JOIN etl.WarehouseEntityType AS wet WITH (NOLOCK)
    ON we.WarehouseEntityTypeId = wet.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'

An example of an OOB fact table returned is ActivityStatusDurationFact, which has a WarehouseEntityId of 81:


The corresponding retention setting for this fact table is stored in the etl.WarehouseEntityGroomingInfo table, so if we run the following query, the ‘RetentionPeriodInMinutes’ field will show us the individual retention configured for that particular table.
Query:

select warehouseEntityID, RetentionPeriodInMinutes from etl.WarehouseEntityGroomingInfo where WarehouseEntityId = 81

Result:


A SQL Statement such as the following could be used to update an individual fact table to an appropriate value:

USE DWDataMart
UPDATE etl.WarehouseEntityGroomingInfo
SET RetentionPeriodInMinutes = [number of minutes to retain data]
WHERE WarehouseEntityId = [WarehouseEntityID of Fact table to update]
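To review the current retention of every fact table in one shot, the tables used above can be joined into a single read-only query:

SELECT we.WarehouseEntityName,
       gi.RetentionPeriodInMinutes,
       gi.RetentionPeriodInMinutes / 1440 AS RetentionDays
FROM etl.WarehouseEntity AS we
JOIN etl.WarehouseEntityGroomingInfo AS gi ON gi.WarehouseEntityId = we.WarehouseEntityId
JOIN etl.WarehouseEntityType AS wet ON wet.WarehouseEntityTypeId = we.WarehouseEntityTypeId
WHERE wet.WarehouseEntityTypeName = 'Fact'
ORDER BY we.WarehouseEntityName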


KB: Service Manager 2010 Reports fail to include current data

Here’s a new Knowledge Base article we published this morning. This one discusses an issue where reporting data isn’t current due to a hung MPSyncJob or DWMaintenance job.

=====

Symptoms

In System Center Service Manager 2010, you may experience one or more of the following:

-Reports do not have current information
-ETL jobs are not running
-The MPSyncJob or DWMaintenance job runs for an extended period of time (e.g., more than a few hours)

Cause

This can occur when either the MPSyncJob or DWMaintenance job is hung or stuck.

NOTE: When the MPSyncJob or DWMaintenance job is running, it disables the ETL jobs. Because of this, no ETL jobs can run until the MPSyncJob or DWMaintenance job finishes running. When either job is hung, no data will move to the DWDataMart for reporting.

NOTE: The MPSyncJob can take up to 6 hours for initial deployment. However, after initial deployment the MPSyncJob should complete quickly, usually in a few minutes. If the MPSyncJob is taking several hours or longer, there is a problem that needs to be addressed. This expected time also applies to the DWMaintenance job; more information on resolving those issues can be found at http://technet.microsoft.com/en-us/library/hh542403.aspx

Resolution

Before troubleshooting, you will first need to understand what a Batch, Module, and Workitem are.

Batch:
The data warehouse creates a batch for each process it wants to run, such as the extract job. For example:

Select * FROM Infra.Process Where ProcessName like '%Extract%'

The data warehouse creates the batches for this process:

Select * FROM Infra.Batch Where ProcessId = <ProcessID from 1st query>

Module: A module gets the data from classes. Multiple modules are bound together to run in a batch, and each module runs as one workitem. To find the modules for a process returned by the first query, use the query below:

Select * FROM Infra.ProcessModule Where ProcessId = <ProcessID from 1st query>

Workitem: A workitem is the smallest execution unit. Once all workitems are finished, the batch is considered finished:

Select * FROM Infra.WorkItem Where BatchId = <BatchID from 2nd query>

Compare this result with the result of the third query: the number of rows returned will be the same, because every module runs in a separate workitem.
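As a convenience, the workitems of a batch can be summarized by status in one query (assuming, as the queries above suggest, that Infra.WorkItem carries a StatusId matching Infra.Status):

SELECT s.StatusId, COUNT(*) AS WorkItems
FROM Infra.WorkItem AS wi
JOIN Infra.Status AS s ON s.StatusId = wi.StatusId
WHERE wi.BatchId = <BatchID from 2nd query>
GROUP BY s.StatusId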

Troubleshooting Steps:

MPSyncJob hung:

When we see that the MPSyncJob is hung, we first need to figure out why. You can do that using the steps below:

1. Get the MPSyncJob batchid using the PowerShell command Get-SCDWJob –JobName MPSyncJob.
2. Issue the following query against DWStagingAndConfig, using the batchid from step 1.

select * from infra.WorkItem where BatchId = '...'

3. If no workitem is in status 7 (i.e. all of the workitems are either 6 or 3), and the job has been running for a while, then we need to restart the “System Center Management” service.

NOTE: Find all of the available statuses by using the query below.

Select * FROM Infra.Status

4. Otherwise, check the CustomInfo column of the workitems whose StatusId = 7.

If it says waiting to acquire a lock, that means the DWMaintenance job is currently running. Wait until that job is complete, then focus on why the DWMaintenance job is hung. If it says waiting for deployment to complete, then we need to run the following:

select * from DeploySequenceView where DeploymentStatusId != 6

If the query returns something where the status is waiting, not started, or running, that means the deployment cannot complete. Next we need to check whether the Deployment and Execution process categories are enabled, using the following query:

Select * from infra.ProcessCategory

From here, we want to make sure the Deployment and Execution process categories are enabled. If they are enabled, from this point on the investigation should focus on MP deployment. Check the affected MPs again with this query:

select * from DeploySequenceView where DeploymentStatusId != 6

This returns the MPs with a failed status; any MPs that depend on a failed MP will show a status of Waiting. From the DeploySequenceView view definition in DWStagingAndConfig, we can see that DeploymentStatusId is calculated from the following tables:

dbo.DeploySequence, dbo.DeploySequenceStaging, and Infra.WorkItem

The workaround for this is to modify the DeploySequenceStaging table using the commands below.

NOTE: Please complete a backup of the DWStagingAndConfig database before running the query below.

use DWStagingAndConfig
update DeploySequenceStaging
set StatusId = 2
where StatusId = 4

The workaround above just allows the MPSyncJob to move on; it doesn’t resolve the underlying MP deployment failure.

DWMaintenance job hung

In general, when we see the DWMaintenance job is hung, we would troubleshoot this much the same way we would a hung MPSyncJob:

Get the DWMaintenance batchid by using the PowerShell command Get-SCDWJob –JobName DWMaintenance

Issue the following query to DWStagingAndConfig using the batchid from the first step:

select * from infra.WorkItem where BatchId = '...'

From here, continue on just as you would for a hung MPSyncJob.

More Information

Prevention:
Monitor the jobs’ running status weekly using the PowerShell command Get-SCDWJob. Use PowerShell instead of the UI because the PowerShell output shows how long each job has been running.

=====

For the most current version of this article please see the following:

2703027 - Service Manager 2010 Reports fail to include current data

J.C. Hornbeck | System Center & Security Knowledge Engineer

Get the latest System Center news on Facebook and Twitter:


App-V Team blog: http://blogs.technet.com/appv/
ConfigMgr Support Team blog: http://blogs.technet.com/configurationmgr/
DPM Team blog: http://blogs.technet.com/dpm/
MED-V Team blog: http://blogs.technet.com/medv/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
SCVMM Team blog: http://blogs.technet.com/scvmm
Server App-V Team blog: http://blogs.technet.com/b/serverappv
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
System Center Essentials Team blog: http://blogs.technet.com/b/systemcenteressentials
WSUS Support Team blog: http://blogs.technet.com/sus/

The Forefront Server Protection blog: http://blogs.technet.com/b/fss/
The Forefront Endpoint Security blog : http://blogs.technet.com/b/clientsecurity/
The Forefront Identity Manager blog : http://blogs.msdn.com/b/ms-identity-support/
The Forefront TMG blog: http://blogs.technet.com/b/isablog/
The Forefront UAG blog: http://blogs.technet.com/b/edgeaccessblog/


Update 3 is available for the System Center 2012 R2 Service Manager Self-Service Portal

We are pleased to announce the third cumulative update for the new HTML5 Self-Service Portal; it can be downloaded from here. Because this is a cumulative update, you can install it directly on the RTM release, or on top of Update 1 or Update 2.

Please note that this release is independent of the Service Manager UR9 release and does not require UR9 to be installed. This patch needs to be applied only on the machine(s) hosting the new Self-Service Portal to enable the following features and fixes.

Here are the new features introduced in this release:

  • Attachments can be viewed and downloaded from the Self-Service Portal.
  • The Must Vote and Has Veto information is added for reviewers in Review activities.
  • By default, the portal puts custom enumerations for My Request (Incident & Service Request) states in the Closed filter category. Now the portal also allows a customization to map the required custom states to the Active filter category. For more details, see “CustomActiveRequestStatusEnumList” under the “Basic Customization” section at this link.

Here are the issues fixed in this release:

  • Multiple selections across pages in the Query UI do not work.
  • Enumerations in lists don’t appear in the same order as they are shown in the console.
  • Cannot scroll to the last item in Internet Explorer 10.
  • Selections of an optional query element in a request offering are not mapped to work item fields.
  • The Resolved Date property for an incident is not set when resolving an incident from the Self-Service Portal.
  • When you author a request offering, custom properties cannot be mapped to an activity which is part of another activity.
  • The portal does not set Actual End Date or Decision Date values for manual and review activities.
  • The portal displays an incorrect time when the server uses the 12-hour (AM/PM) time format.
  • Request offerings and service offerings are unsorted (now they are shown alphabetically).
  • Page load fails with a JavaScript error when you click the Share icon for an item which has a single quotation mark (‘) in the title.
  • Marking a manual activity as failed takes it to the Completed state.
  • The date picker in a request offering form keeps the date only in U.S. format.
  • A request offering form crashes if it contains an empty Simple List form element.
  • Activities are missing in My Activities for the “All” filter in the Turkish language.
  • Display-only query results are behaving as mandatory fields.

 

Post Installation Manual Update (Optional)

If you changed any of the .cshtml files after the last update, those files will not get updated by the patch. This happens because the update installer finds that the last-modified date of the .cshtml file differs from what was installed last time (RTM, Update 1, or Update 2), so it skips the file while installing Update 3. To make the update work properly in this case, either:

1) Revert all changed .cshtml files to the last installed version (RTM, Update 1, or Update 2), or

2) Install the portal on a new machine (it does not need to connect to any SDK server) and then patch it with Update 3. This way you will get the latest files, and you can copy the new files directly to your deployment.

KB: Data Warehouse jobs fail and event ID 33502 is logged in Microsoft System Center 2012 Service Manager

We recently published a new Knowledge Base article that discusses an issue where Data Warehouse jobs fail in SCSM 2012. When this problem occurs the following event is logged in the Operations Manager event log on the Data Warehouse server:

Log Name: Operations Manager
Source: Data Warehouse
Event ID: 33502
Level: Error
Description:
ETL Module Execution failed:
ETL process type: Transform
Batch ID: ######
Module name: TransformEntityRelatesToEntityFact
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Also, when you run certain Data Warehouse related cmdlets you may also see a timeout error recorded for the TransformEntityRelatesToEntityFact module that resembles the following:

Get-SCDWJobModule -JobName transform.common
. . .
1952 TransformEntityRelatesToEntityFact Failed
. . .

For all the details regarding why this problem might occur and a couple options to resolve it, please see the following:

3137611 - Data Warehouse jobs fail and event ID 33502 is logged (https://support.microsoft.com/en-us/kb/3137611)


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

KB: Event ID 33601 when you process an SLA workflow in Service Manager

When you are processing a Service Level Agreement (SLA) workflow in System Center 2012 R2 Service Manager (SCSM 2012 R2), the following error may be logged in the Operations Manager log:

Log Name: Operations Manager
Source: SMCMDB Subscription Data Source Module
Event ID: 33601
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: ComputerName
Description:
The database subscription configuration is not valid.
The following errors were encountered:
Exception message: Subscription configuration error. Error reading TargetType element. Error message: Invalid TargetType attribute specified. Target type id 9bc85fd0-934c-bfdb-9643-63779a0f3742 must be either abstract or first non abstract base type.
One or more subscriptions were affected by this.
Subscription name: WorkflowSubscription_9d183789_7944_49f2_b5fe_2d8f77ad6ddc
Instance name: SLA Workflow Target: DisplayName
Instance ID: {69CBC824-AA85-B123-58C3-A46F97E54BF7}
Management group: ManagementGroup

This can occur when the Service Level Objective (SLO) has been configured to use a derived class. For example, assume that you create a new class that is based on the Service Request class and that it is named SRNewClass. When you create a Service Level Objective and you select SRNewClass from the “Class” section on the General tab in the wizard, event ID 33601 is returned during the workflow process.

For complete details as well as a work around, see the following:

3171966 - Event ID 33601 when you process an SLA workflow in Service Manager (https://support.microsoft.com/en-us/kb/3171966)

 

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group


KB: Data to gather when opening a case for Microsoft Azure Automation

A new Knowledge Base article has been published that describes some of the basic information you should gather before opening a case for Azure Automation with Microsoft product support. This information is not required; however, it will help Microsoft resolve your problem as quickly as possible. You can find the complete article below.

3178510 - Data to gather when opening a case for Microsoft Azure Automation (https://support.microsoft.com/en-us/kb/3178510)


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

SCSM 2016 – Upgrade steps for custom development

With the SCSM 2016 release, the product has moved to .NET 4.5.1. Supporting this move required breaking a few dependencies, which led to classes moving across assemblies.
This may break custom third-party (non-Microsoft) solutions after an upgrade to SCSM 2016.

Your custom solution will be impacted if:

  • The custom solution targets a .NET Framework version lower than 4.5.1
  • Existing classes or controls used by the custom solution have been moved to a different assembly
  • The custom solution references SM assemblies with a “Specific Version” (7.1.1000.1) reference

After upgrade to SCSM 2016, you might see the below popups on the SM console:

[Images: pop1, pop2, pop3]

You can fix the problem with the following steps:

  • Recompile the custom solutions to target .NET Framework 4.5.1.
  • When you build your toolset with SM 2016, modify your solutions to include references to the appropriate SM assemblies. The provided Excel sheet has detailed information about the affected classes.
  • Remove the “Specific Version” (7.1.1000.0) information when referencing the out-of-box SM assemblies in your custom solutions.

In SM 2012 R2, a few assemblies have a higher version (7.1.1000.0) than the SM 2016 assemblies; in SM 2016, all assemblies have the same version (7.0.5000.0).

Steps for upgrade to SCSM 2016

  1. Perform an in-place upgrade of SM 2012 R2 to SM 2016.
  2. Reimport or reinstall the upgraded custom solutions from partners/MVPs.

What’s next:
Our partners (CasedDimensions, Gridpro, Cireson, Provance) will be offering their updated solutions for Service Manager 2016.

You can also refer to the following blog from our MVP Kurt Van Hoecke for more related information:
http://www.scug.nl/system-center/scsm-2016-steps-used-for-upgrading-custom-development/

Excel sheet with detailed information about code migration (affected classes): SCSMCodeMigration


Service Manager #LyncUp calls

We are currently heads down working towards the System Center 2016 launch. As such, on June 24th we sent out a cancellation notice for the Service Manager #LyncUp calls, with the plan to resume them after System Center 2016 GA (General Availability). Meanwhile, it has been brought to our attention that on August 16th a few customers dialed into Skype expecting the regular session. We sincerely apologize for any inconvenience caused to our customers, MVPs, and partners, and we ask that you delete any existing invites from your calendar. Please continue to share your feedback, comments, and queries via https://connect.microsoft.com/WindowsServer/Feedback.

Removing a Service Manager WorkItem using native PowerShell cmdlets and confirming what happens to the user relationship

Hi everyone, this is Austin Mack from the Microsoft Service Manager support team. Recently I was asked how to remove a Service Manager WorkItem (Incident, Service Request, Change Request, Release Record, Problem). I was also asked to verify that the relationships associated with the WorkItem did not get orphaned. Service Manager relationships have two endpoints, and if you look at a SQL table such as ServiceManager.dbo.MT_System$WorkItem$Incident, you will notice that the user accounts for “Affected User” and “Assigned To User” are not stored in the incident table. Service Manager creates a relationship to the user object, which contains a lot of information, instead of just recording a user string in the WorkItem.

Below is what happens under the covers to the existing relationships when a WorkItem is deleted. This example uses an incident (IR78) that has the affected user “CONTOSO\GlenJohn148”. First we need to get the relationship ID for “Affected User”, so we’ll run the following in PowerShell:

# Automatically import the Service Manager cmdlets
import-module (((Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup").InstallDirectory)+"Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1")
# Display relationship ID types
Get-SCRelationship | ?{$_.DisplayName -like "*User"} | FT ID, DisplayName, Name -AutoSize

This shows the following:

ID                                    DisplayName        Name
dec8d54a-6284-0f9d-cafb-8373f4dc865a  Has Working User   System.WorkItem.BillableTimeHasWorkingUser
76bc6c3b-a77b-2468-0a63-169d23dfcdf0  Closed By User     System.WorkItem.TroubleTicketClosedByUser
f7d9b385-a84d-3884-7cde-e2c926d931a5  Resolved By User   System.WorkItem.TroubleTicketResolvedByUser
dff9be66-38b0-b6d6-6144-a412a3ebd4ce  Affected User      System.WorkItemAffectedUser
15e577a3-6bf9-6713-4eac-ba5a5b7c4722  Assigned To User   System.WorkItemAssignedToUser
ba8180d3-5bf9-1bbd-ae87-145dd8fc520f  Closed By User     System.WorkItemClosedByUser
df738111-c7a2-b450-5872-c5f3b927481a  Created By User    System.WorkItemCreatedByUser
f6205e94-82f9-9a97-3b4f-c7127afb43a8  Requested By User  System.WorkItemRequestedByUser
40927c76-8993-7427-dd76-6245e8482ae7  Created By User    System.PolicyItemCreatedByUser
ffd71f9e-7346-d12b-85d6-7c39f507b7bb  Added By User      System.FileAttachmentAddedByUser
90da7d7c-948b-e16e-f39a-f6e3d1ffc921  Is User            System.ReviewerIsUser
9441a6d1-1317-9520-de37-6c54512feeba  Voted By User      System.ReviewerVotedByUser
aaf7adeb-920c-3d3f-2184-1de2a2cba5a0  Primary User       System.ComputerPrimaryUser
cbb45424-b0a2-72f0-d535-541941cdf8e1  Owned By User      System.ConfigItemOwnedByUser
dd01fc9b-20ce-ea03-3ec1-f52b3241b033  Serviced By User   System.ConfigItemServicedByUser
fbd04ee6-9de3-cc91-b9c5-1807e303b1cc  Affects User       System.ServiceImpactsUser
4a807c65-6a1f-15b2-bdf3-e967e58c254a  Manages User       System.UserManagesUser

Next we'll display the relationships for "CONTOSO\GlenJohn148" that are of type "Affected User":

## Display relationships in use by CONTOSO\GlenJohn148. GlenJohn148 was a new account with only one IR as the affected user ##
$ADUserClass = Get-SCClass -Name Microsoft.AD.User
$user = Get-SCClassInstance -Class $ADUserClass | ?{$_.UserName -like "GlenJohn148"}
Get-SCRelationshipInstance -TargetInstance $user | ?{$_.RelationshipId -eq "dff9be66-38b0-b6d6-6144-a412a3ebd4ce"}

Sample Output:

SourceObject: IR78 – My Parent Incident request
TargetObject: CONTOSO\GlenJohn148
RelationshipId: dff9be66-38b0-b6d6-6144-a412a3ebd4ce
IsDeleted: FALSE
Values: {}
LastModified: 9/7/2016 19:31
IsNew: FALSE
HasChanges: FALSE
Id: ac3fd46d-a76f-8b74-6f42-9943e15a8a04
ManagementGroup: SM_mgmt_group_name
ManagementGroupId: 508488d4-edd9-3568-f10c-3a94e835587c
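
As an aside, you can avoid hard-coding the relationship GUID by looking it up by name first. This is a small sketch, assuming Get-SCRelationship accepts -Name the same way Get-SCClass does:

# Fetch the Affected User relationship by name, then filter the user's relationship instances on its ID
$affectedUserRel = Get-SCRelationship -Name System.WorkItemAffectedUser
Get-SCRelationshipInstance -TargetInstance $user | ?{$_.RelationshipId -eq $affectedUserRel.Id}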

Because every relationship has two endpoints, let’s also display the other side of the relationship. An incident will have many relationships and one of them is the affected user (dff9be66-38b0-b6d6-6144-a412a3ebd4ce):

$WorkItemClassType = Get-SCClass -Name System.WorkItem.Incident
$workitems = Get-SCClassInstance -Class $WorkItemClassType
$workitem = $workitems | ?{$_.Id -like "IR78"}
$workitems_RI = Get-SCRelationshipInstance -SourceInstance $workitem
foreach ($workitemRel in $workitems_RI)
{
    "  Name:  " + $workitemRel.TargetObject.Name + "     RelationshipID: " + $workitemRel.RelationshipId.ToString()
}

Sample Output:

Name:  MA79                                   RelationshipID: 2da498be-0485-b2b2-d520-6ebd1698e61b
Name:  CONTOSO\GlenJohn148                    RelationshipID: dff9be66-38b0-b6d6-6144-a412a3ebd4ce
Name:  72e077d1-0386-414a-a525-b110fab4b67e   RelationshipID: a860c62e-e675-b121-f614-e52fcbd9ef2c

We can see the Affected User relationship present, and IR78 points to GlenJohn148. Using the previous PowerShell block, where $workitem was filtered down to the single incident, we can now remove (delete) incident IR78:

$workitem | Remove-SCClassInstance

We can no longer get to the relationships for WorkItem IR78 because it is now gone. However, every relationship has two endpoints, so what happened to the relationship displayed from the user object's side? We can see the user relationship that GlenJohn148 has by re-running the initial PowerShell script:

## Display relationships in use by CONTOSO\GlenJohn148. GlenJohn148 was a new account with only one IR as the affected user ##
$ADUserClass = Get-SCClass -Name Microsoft.AD.User
$user = Get-SCClassInstance -Class $ADUserClass | ?{$_.UserName -like "GlenJohn148"}
Get-SCRelationshipInstance -TargetInstance $user | ?{$_.RelationshipId -eq "dff9be66-38b0-b6d6-6144-a412a3ebd4ce"}

Sample Output:

SourceObject:
TargetObject: CONTOSO\GlenJohn148
RelationshipId: dff9be66-38b0-b6d6-6144-a412a3ebd4ce
IsDeleted: TRUE
Values: {}
LastModified: 9/7/2016 19:50
IsNew: FALSE
HasChanges: FALSE
Id: ac3fd46d-a76f-8b74-6f42-9943e15a8a04
ManagementGroup: SM_mgmt_group_name
ManagementGroupId: 508488d4-edd9-3568-f10c-3a94e835587c

You will notice that the SourceObject that previously pointed to IR78 is now blank, and the IsDeleted flag for the relationship is set to True, ready for Service Manager to groom the relationship. Grooming occurs at regular intervals; some items are not groomed for two days (grooming runs at 2:00 AM) to allow the IsDeleted flag to migrate to the Data Warehouse.
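
If you are curious how many relationships are currently flagged and awaiting grooming, you can spot-check in SQL. This is a sketch only: it assumes the default dbo.Relationship table in the ServiceManager database and a placeholder server name, so verify the schema in your own environment first (Invoke-Sqlcmd comes from the SqlServer module):

# Count relationship rows flagged as deleted but not yet groomed (table layout assumed; verify first)
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "ServiceManager" -Query "SELECT COUNT(*) AS DeletedRelationships FROM dbo.Relationship WHERE IsDeleted = 1"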

If you want to work with one of the other WorkItem classes, you can update the line below in your PowerShell command with the corresponding class:

$WorkItemClassType = Get-SCClass -Name System.WorkItem.Incident

Below is the list of WorkItem classes that correspond to Release Record, Problem, Change Request, Service Request, and Incident (a short example of swapping one in follows the list):

System.WorkItem.ReleaseRecord
System.WorkItem.Problem
System.WorkItem.ChangeRequest
System.WorkItem.ServiceRequest
System.WorkItem.Incident
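
For instance, here is a minimal sketch for removing a change request instead. The ID CR45 is hypothetical; substitute one from your environment, and remember the deletion is permanent:

$WorkItemClassType = Get-SCClass -Name System.WorkItem.ChangeRequest
# CR45 is a placeholder ID; pick a real change request from your environment
$workitem = Get-SCClassInstance -Class $WorkItemClassType | ?{$_.Id -like "CR45"}
$workitem | Remove-SCClassInstance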

Austin Mack | Senior Support Escalation Engineer

Monitoring Service Manager with Microsoft System Center Operations Manager

Microsoft System Center Operations Manager, along with the Service Manager management packs, provides an excellent monitoring platform for Service Manager. Many important pieces that are vital to Service Manager health can be monitored, including:

  • Checking SQL Server to be sure the Broker Service is enabled (see the sketch just after this list)
  • Checking the Service Manager Grooming History to be sure cleanup of the CMDB is taking place
  • Checking workflows for problems
  • Checking MP sync for failures

…and much more. The full list of rules and monitors can be found in the Service Manager Management Pack Guide that you can download along with the Management Packs, and I would encourage you to read that guide thoroughly before deploying the Management Packs and bringing your Service Manager monitoring online.
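
As an example of what the first check in that list looks like by hand, here is a minimal sketch that verifies Service Broker is enabled on the CMDB; the server and database names are placeholders, and Invoke-Sqlcmd comes from the SqlServer module:

# is_broker_enabled is 1 when the Broker Service is on for the database (names are placeholders)
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'ServiceManager'"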

One key note in the guide is where it says "This management pack requires agentless monitoring…" It's been a while since the guide was released, and that, along with the fact that the SP1 release for System Center 2012 introduced a Control Panel applet for the Microsoft Monitoring Agent (HealthService), has contributed to some confusion over agent versus agentless monitoring. This post addresses one possible drawback of using agent-based monitoring: receiving the following events in your Service Manager management server's Operations Manager event log:

Event ID 6024
LaunchRestartHealthService.js : Launching Restart Health Service. Health Service exceeded Process\Handle Count or Private Bytes threshhold.

Event ID 6060
RestartHealthService.js : Restarting Health Service. Error: Failed to Terminate Health Service

Event ID 6062
RestartHealthService.js : Restarting Health Service. Service successfully restarted.

For more information on this, see https://blogs.technet.microsoft.com/omx/2013/10/17/health-service-restarts-on-service-manager-servers-with-scom-agents/.

In Service Manager, the Microsoft Monitoring Agent's primary role is managing all the Service Manager workflows, including workflow subscriptions, group calculations, connectors, configuration, etc. Adding the Service Manager HealthService as an Operations Manager agent can negatively impact the Service Manager workflows when the Operations Manager workflows run on their configured schedules.
So how do you find out whether you're agent managed or not, and how do you switch?

First, check in Control Panel on your Service Manager Workflow Management server. If you have anything listed under Management Groups, you're probably Agent managed.

You can also check in Operations Manager. Open the console and navigate to Administration –> Device Management –> Agent Managed and search for your Service Manager Workflow Management server. If it’s in there, you’re definitely Agent managed.
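
If you prefer PowerShell, here is a quick sketch using the Operations Manager shell; the host name is a placeholder:

# Returns the agent object only if the server is agent managed; no output means it is not
Import-Module OperationsManager
Get-SCOMAgent -DNSHostName "smworkflow.contoso.com"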

The good news is that it’s easy to switch. First, clear everything from Control Panel on your Service Manager Workflow Management server, including the Automatically update management group assignments from AD DS check box. Next, select the Service Manager Workflow Management server from Operations Manager and click Delete from the task pane.

Give everything a few minutes to settle in, and then from Operations Manager, right-click Device Management and open the Discovery Wizard:

At this point I'm going to assume you're familiar with Operations Manager and skip to the important part: the Select Objects to Manage page. Select your Service Manager Workflow Management server from here, then at the bottom of the wizard dialog, choose Agentless from the Management Mode: dropdown list.

After you choose Agentless, you’ll have the option to choose your Proxy Agent:

The proxy agent should be an Operations Manager Management Server or another computer in your environment (not one running Service Manager!) that is running the Microsoft Monitoring Agent and reports to the desired Operations Manager management group. The MP deployment guide has more details about what is required of your proxy agent.

One more thing before closing: the MP deployment guide section on the Service Manager Database Account profile in Operations Manager has an omission. One of the rules uses a function within SQL Server that requires Execute permissions. Instead of granting only the db_datareader role, you can choose a role that includes Execute permissions, or better yet, just give your data reader account permission to execute that single object.

Start with the Database User you created. Select the user and then Properties:

Select the Securables page and then click Search. You will get the dialog below. Select Specific Objects, then click OK.

From Select Objects, click Object Types, select Scalar functions, and click OK.

Click Browse, locate the dbo.fn_GetEntityChangeLogGroomingWatermark function, select it, and click OK.
Note that after you click OK you have the option to check names; use this if you wish to be sure you have the correct name. When done, click OK again.

Give the user Grant->Execute permissions on the object, then click OK and you’re done.
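
If you'd rather script the grant than click through the dialogs, here is a minimal sketch; the server, database, and login names are assumptions, and Invoke-Sqlcmd comes from the SqlServer module:

# Grant Execute on just that one function (all names here are placeholders)
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "ServiceManager" -Query "GRANT EXECUTE ON OBJECT::dbo.fn_GetEntityChangeLogGroomingWatermark TO [CONTOSO\svc_smreader];"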

You can confirm that you set the desired permissions by opening SQL Server Management Studio as the data reader user you created. Expand the Functions folder under the database, and then expand Scalar Functions. You should see only the function for which you granted permissions. You can also test by running this query as that user.

SELECT dbo.fn_GetEntityChangeLogGroomingWatermark() AS HighWaterMark

Now, you’re agentless!

REMINDER: Be sure you go through the MP deployment guide to get the correct security configurations for all the pieces of this monitoring solution.

The good news is that there's a solution for monitoring your Service Manager environment, and you probably already have it: Microsoft System Center Operations Manager. Go forth and monitor, and go agentless; this will keep you informed of many known issues in Service Manager and help keep your HealthService healthy.

Scott Walker, Senior Support Escalation Engineer
Microsoft Enterprise Cloud Group

Service Manager now available in System Center 2016

Hi everyone! We are delighted to announce the availability of System Center 2016 Service Manager. This new release contains a wide array of additions and improvements: it's faster, provides better usability, and comes with many new features.

When it comes to performance, this release of Service Manager is a big leap from its predecessor, offering enhanced performance across all areas of the product, including Work Item creation, workflow runtime, and connector sync time.

Here is a quick glimpse of what we have found in our testing (in comparison with Service Manager in System Center 2012 R2):

[Image: sm 2016 blog 1]

You can read the details of these comparisons here.

In addition to the significant performance enhancements, there are also several new features to enhance your experience with System Center 2016 Service Manager:

  • Data Warehouse cubes now contain new date dimensions, which help in creating rich reports and slicing data by Year, Quarter, Month, Day, etc.
  • A new HTML-based Self Service Portal, which offers many new features with an easy-to-navigate modern UI and multi-browser support.
    [Image: sm 2016 blog 2]
  • Support for .NET Framework 4.5.1, which expands the possibilities for developing solutions on the Service Manager platform.
  • Console forms now include out-of-the-box spell check support, to make the lives of help desk analysts a little easier.
  • A new console task, Open Activity in Progress, makes it easier to review in-progress activities without digging for them inside the Service Request or Change Request forms.
  • Service Manager now supports integration with Lync 2013 and Skype for Business in Microsoft Office 2013 and 2016, which lets you contact a user directly from an Incident form.

All these new features and other performance and usability enhancements are now available in System Center 2016 Service Manager. You can read about them in detail in the What's New in Service Manager documentation. Also, feel free to use the comment section to share your feedback and suggestions. We look forward to hearing your thoughts.

Get started now!  Download System Center 2016.

Extending the support of platforms for SCSM 2016

Hi everyone! Thanks for sharing your feedback and experiences with the Service Manager 2016 deployment. We heard you, and to make your deployment and upgrade experience better, we are now extending support for the following platforms for Service Manager 2016:

  • Support for SQL Server 2014 SP2 for both SM 2016 and SM 2012 R2
    As some of you were waiting to upgrade your SQL Servers, Service Manager 2016 and Service Manager 2012 R2 (with Update Rollup 9) now officially support Service Pack 2 for SQL Server 2014 for hosting your Service Manager CMDB and Data Warehouse databases.
  • Support for the SM 2016 console on Windows 7
    Service Manager 2016 console installation is now supported on Microsoft Windows 7. This requires installing .NET Framework 4.5.1 as a prerequisite on Windows 7. You can download it from here.

    Please note that the new spell check feature introduced in the Service Manager 2016 console has limited language support on Windows 7 installations; the supported languages on Windows 7 are English, French, German, and Spanish.

  • Support for SM 2016 connectors with System Center 2012 R2 components
    We heard your feedback that a seamless, easier upgrade to Service Manager 2016 requires keeping support for SM connectors with System Center 2012 R2 components. Hence, we will support System Center 2012 R2 Virtual Machine Manager, Orchestrator, Operations Manager, and Configuration Manager (including SCCM 1511, 1602, and 1606) for use with Service Manager 2016 connectors.

We have done a fair amount of validation to make sure that everything continues to work as expected. That said, if anything seems amiss, let us know via the comments below.
