
Update_ComplianceStatus reporting inconsistencies on your CAS?

Do you have a CM hierarchy (CAS and Primary site(s)) deployed, and are you seeing inconsistencies with update compliance status reporting?   Normally, triggering a "RefreshServerComplianceState" on the troubled endpoints does the trick. (By the way, click here if you'd like to see my colleague Sherry Kissinger's awesome blog about automating this in your environment by making it a CI/Baseline, which has worked great for us.)

This "refreshes" the troubled clients and sends their actual patch compliance state up the chain, which normally fixes the issue. But what if that doesn't work, completely resetting the machine policy doesn't cut it, and your patch reporting still shows discrepancies? That's exactly what we ran into in our environment just recently. How does this happen? When dealing with a hierarchy in a large environment, you're bound to have network hiccups, storage issues, outages, DRS issues, data corruption, and so on, any of which can result in data and/or state inconsistencies. Nonetheless, here's how we addressed the issue.

Determining the issue:

When querying the CAS database for the "required" patches that need to be applied on the client in question (Status 2 means Required/Missing), using the SQL query below…

DECLARE @ResourceID INT = (SELECT resourceid FROM v_r_system_valid s1 WHERE s1.netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>')
IF @ResourceID IS NOT NULL
BEGIN
  -- get update info
  DECLARE @Updates TABLE (CI_ID INT, BulletinID VARCHAR(64), ArticleID VARCHAR(64), Title VARCHAR(512), Description VARCHAR(3000), InfoURL VARCHAR(512), Severity INT,
    IsSuperseded INT, IsExpired INT, DateLastModified DATETIME);

  INSERT INTO @Updates
  SELECT CI_ID, BulletinID, ArticleID, Title, Description, InfoURL, Severity,
         IsSuperseded, IsExpired, DateLastModified
  FROM dbo.v_UpdateInfo ui
  WHERE Severity IS NOT NULL

  SELECT upd.BulletinID, upd.ArticleID, ucs.Status, ucs.LastStatusChangeTime, ucs.LastStatusCheckTime,
         CASE WHEN ucs.Status IN (1, 3) THEN 'GreenCheck' ELSE 'RedX' END AS StatusImage,
         CASE
           WHEN upd.Severity = 0  THEN 'None Declared'
           WHEN upd.Severity = 2  THEN 'Low'
           WHEN upd.Severity = 6  THEN 'Moderate'
           WHEN upd.Severity = 8  THEN 'Important'
           WHEN upd.Severity = 10 THEN 'Critical'
           ELSE CAST(upd.Severity AS NVARCHAR) END AS 'Severity',
         upd.IsSuperseded, upd.Title,
         upd.Description,
         upd.InfoURL,
         upd.DateLastModified AS [Last Modified Date by Microsoft]
  FROM @Updates upd
       JOIN dbo.v_Update_ComplianceStatusAll ucs
         ON upd.CI_ID = ucs.CI_ID
        AND ucs.ResourceID = @ResourceID
        AND ucs.Status = 2   -- only Required
        AND upd.IsExpired = 0
  ORDER BY upd.Severity DESC, upd.IsSuperseded, upd.DateLastModified DESC, ArticleID DESC
END
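If you'd rather run that query from PowerShell than from SSMS, something like the following works. This is just a minimal sketch: the SqlServer module, the "CASSQLSERVER" instance, the "CM_CAS" database, and the script path are all placeholders/assumptions for your own environment.

# Minimal sketch: run the query above (saved to a .sql file) against the CAS database.
# "CASSQLSERVER", "CM_CAS", and the file path are placeholders for your environment.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "CASSQLSERVER" -Database "CM_CAS" `
    -InputFile "C:\Scripts\Get-RequiredUpdates.sql" |
    Select-Object ArticleID, Severity, Title |
    Format-Table -AutoSize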

We would get over 60 different patches, including both Office 2013 and Office 2016 patches.  Um...  Hmm, the workstation doesn't even have Office 2013 installed!  Hmmm...

But then, when checking the client itself for missing patches using the PowerShell command line below:

get-wmiobject -computername <HOSTNAME HERE> -query "SELECT * FROM CCM_UpdateStatus" -namespace "root\ccm\SoftwareUpdates\UpdatesStore" | Where-object { $_.Status -eq "Missing" } | select-object Title, Status

We'd only get about 19 patches that are required/missing.
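Side note: Get-WmiObject is deprecated in newer PowerShell versions, so here's a rough CIM-based equivalent of the same check. It assumes WinRM/CIM remoting is enabled on the client, and the hostname is a placeholder:

# CIM equivalent of the WMI query above; "<HOSTNAME HERE>" is a placeholder.
$missing = Get-CimInstance -ComputerName "<HOSTNAME HERE>" `
    -Namespace "root\ccm\SoftwareUpdates\UpdatesStore" `
    -Query "SELECT * FROM CCM_UpdateStatus WHERE Status = 'Missing'"
$missing | Select-Object Title, Status
"Missing update count: $($missing.Count)"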

How do we fix this?

As mentioned earlier, normally triggering the RefreshServerComplianceState at the device level fixes the issue.

invoke-command -ComputerName "<HOSTNAME HERE>" -Scriptblock {$SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore; $SCCMUpdatesStore.RefreshServerComplianceState()}
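If you have more than one troubled endpoint, the same refresh can be pushed to a list of machines in one shot; a quick sketch, assuming a plain-text file with one hostname per line (the file path is a placeholder):

# "C:\Temp\TroubledClients.txt" is a placeholder: one hostname per line.
$clients = Get-Content "C:\Temp\TroubledClients.txt"
Invoke-Command -ComputerName $clients -ScriptBlock {
    # Ask the client to resend its full update compliance state up the chain
    $SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore
    $SCCMUpdatesStore.RefreshServerComplianceState()
}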

But this didn't appear to work in our case… We still had inconsistencies even after executing this option. After looking at the clients extensively, it turned out the clients were clean, healthy, scanning properly, and reporting inventory up the chain without any issues. We then checked the Primary sites' databases, using the same SQL query above, for the clients' "Required" updates, and the results matched exactly what the troubled clients reported as missing/required.  It appears the CAS holds more stale patch compliance state than the Primary site databases do, and there are definitely discrepancies between the two.
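If you just want a quick side-by-side count rather than the full result set, a trimmed-down version of the query can be run against both databases. A minimal sketch, assuming the SqlServer module; the SQL instances, database names, and client hostname are placeholders:

# Placeholders: adjust the SQL instances, database names, and client hostname for your hierarchy.
$query = @"
DECLARE @ResourceID INT = (SELECT resourceid FROM v_r_system_valid WHERE netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>');
SELECT COUNT(*) AS RequiredCount
FROM dbo.v_Update_ComplianceStatusAll ucs
JOIN dbo.v_UpdateInfo ui ON ui.CI_ID = ucs.CI_ID
WHERE ucs.ResourceID = @ResourceID
  AND ucs.Status = 2
  AND ui.IsExpired = 0
  AND ui.Severity IS NOT NULL;
"@
"CAS required count:     " + (Invoke-Sqlcmd -ServerInstance "CASSQLSERVER" -Database "CM_CAS" -Query $query).RequiredCount
"Primary required count: " + (Invoke-Sqlcmd -ServerInstance "PRISQLSERVER" -Database "CM_PR1" -Query $query).RequiredCount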

So how do we fix this?   We opened a case with Microsoft support, and they recommended running the following against the CAS database in SSMS, which was an eye-opener for me... I had no idea we could initialize a synchronization for JUST a single article and not the entire replication group! NOTE: This might sound scary, but it really isn't.  I thought it would trigger a replication of the entire General_Site_Data group, but no...  just the specific article we're having issues with.  And in fact, this processed REALLY fast for us.   Nonetheless, below is the magic wand :)  (Drum roll....)

EXEC spDrsSendSubscriptionInvalid '<CAS site Code>', '<Primary_Site_Code>', 'General_Site_Data', 'Update_ComplianceStatus'
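For example, with a hypothetical CAS site code of 'CAS' and a primary site code of 'PR1', the call would look like the sketch below (run here via Invoke-Sqlcmd against the CAS database, though SSMS works just as well; the server instance and database name are placeholders):

# Hypothetical site codes 'CAS' and 'PR1'; "CASSQLSERVER" and "CM_CAS" are placeholders.
Invoke-Sqlcmd -ServerInstance "CASSQLSERVER" -Database "CM_CAS" `
    -Query "EXEC spDrsSendSubscriptionInvalid 'CAS', 'PR1', 'General_Site_Data', 'Update_ComplianceStatus'"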

This triggers a replication request from the CAS to the target Primary site server, for the "Update_ComplianceStatus" article only.  In the CAS's rcmctrl.log, here's what you'll start to see:

[Screenshot: rcmctrl.log entries on the CAS showing the replication request for the Update_ComplianceStatus article]
  At the Primary site level, it accepts the request from the CAS and processes it for Update_ComplianceStatus table (only), and it creates the cab file of data for it to be sent to the CAS. Below is shown in rcmctrl.log, along with the # of columns to be processed.

Once the Cab file is done being created in a cab format, it then sends it to the CAS for processing.   Monitor the sender.log at the Primary site server, if you’d like to see the progress.
Back at the CAS, once it receives the cab, it then processes it by removing the old data for Update_ComplianceStatus and replace it with new, along with the # of rows/records to be processed.

During this stage, the CAS DRS status switches to a "Maintenance Mode" status, and a couple of replication groups may show as degraded while this happens. You can check the DRS status by running "exec spDiagDRS" in SSMS against the CAS database.
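If you'd like to keep an eye on that from PowerShell instead of SSMS, the same stored procedure can be called remotely; a minimal sketch (server instance and database name are placeholders):

# Check DRS status while the sync is in flight; spDiagDRS returns several result sets,
# and this sketch simply dumps them to the console.
Invoke-Sqlcmd -ServerInstance "CASSQLSERVER" -Database "CM_CAS" -Query "EXEC spDiagDRS"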

[Screenshot: spDiagDRS output showing the CAS in maintenance mode]
When the CAS finishes processing the BCP data, the status is recorded in rcmctrl.log as shown below.

[Screenshot: rcmctrl.log entries on the CAS recording the completed BCP processing]

Run "exec spDiagDRS" once again to check the overall DRS status of the hierarchy. The CAS should be out of the "Maintenance Mode" status at this point.
Now, try running the SQL query we used above to check the client(s) patch status once again. Voila! The discrepancies that were there before should now be gone. Big thanks to our MS support folks for helping us resolve this issue!

DRS, SQL, Patch Reporting, Patch Compliance
