
CM Inventory per-user installed applications

Sherry Kissinger

There are several examples out there of how to inventory per-user installed applications and their versions (Teams, OneDrive, etc.).  This is another one, from Benjamin Reynolds, and I've tested it a few times to make sure it works like I thought it would.

Overall, the steps are:
1) Deploy the CI inside a Baseline.
2) Import the .mof and enable inventory.

That's the simple and short explanation.  Now for the nitty-gritty details and background story.  Per-user apps are recorded in the user context, i.e., in the HKEY_CURRENT_USER registry keys, which are notoriously difficult to inventory.  Yes, there are methods to mount the hives, read inside, record, and inventory.  But to me, that means you might be opening up and reading a user profile that hasn't been used in months or years--how relevant is it to know that Bob Smith, who left the company 2 years ago, still has an "old" version of Teams associated with a profile he can't possibly have been using for 2 years?

What this routine does is multi-layered, and it solves some (but not all) of the issues I felt could be encountered when inventorying per-user information.

First, a script inside the CI, running as system (not as the logged-on user), creates a custom WMI namespace called "CustomCMClasses" (if it doesn't already exist).  You can change that name if you like; I've seen other examples using "ITLocal" as the custom namespace.  For purposes of this blog, we'll assume you won't be modifying the name, and will leave it as 'CustomCMClasses'.  The script then uses the well-known SIDs for "Everyone" and "Authenticated Users" to open up that namespace, allowing those types of logins (aka, everyone and authenticated users) to write entries to classes in that namespace--like, for example... the per-user installed applications.
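The exact script ships in the attached CI, but the general shape of the system-context piece looks something like this minimal sketch (the namespace name matches the blog; the access-mask bits and ACE handling are my assumptions about the approach, not a copy of the script in the .cab):

# Sketch only: create the namespace, then append allow-ACEs for the
# well-known SIDs Everyone (S-1-1-0) and Authenticated Users (S-1-5-11).
$nsName = 'CustomCMClasses'
if (-not (Get-CimInstance -Namespace 'root' -ClassName '__Namespace' -Filter "Name='$nsName'" -ErrorAction SilentlyContinue)) {
    $ns = (New-Object System.Management.ManagementClass('root', '__Namespace', $null)).CreateInstance()
    $ns.Name = $nsName
    $ns.Put() | Out-Null
}

# Read the namespace security descriptor via the __SystemSecurity system class.
$sd  = Invoke-CimMethod -Namespace "root\$nsName" -ClassName '__SystemSecurity' -MethodName 'GetSD'
$raw = New-Object System.Security.AccessControl.RawSecurityDescriptor($sd.SD, 0)

# 0x1 (Enable) + 0x2 (MethodExecute) + 0x4 (FullWrite) lets these logins write instances.
foreach ($sidString in 'S-1-1-0', 'S-1-5-11') {
    $sid = New-Object System.Security.Principal.SecurityIdentifier($sidString)
    $ace = New-Object System.Security.AccessControl.CommonAce([System.Security.AccessControl.AceFlags]::None, [System.Security.AccessControl.AceQualifier]::AccessAllowed, 0x7, $sid, $false, $null)
    $raw.DiscretionaryAcl.InsertAce($raw.DiscretionaryAcl.Count, $ace)
}

$bytes = New-Object byte[] $raw.BinaryLength
$raw.GetBinaryForm($bytes, 0)
Invoke-CimMethod -Namespace "root\$nsName" -ClassName '__SystemSecurity' -MethodName 'SetSD' -Arguments @{ SD = [byte[]]$bytes } | Out-Null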

Second, a script inside the CI runs in user context.  It first deletes any records ALREADY in that class for that specific user, and then repopulates the class with anything found in the per-user uninstall information.  What's nice about that is that on a multi-user device, you will continue to get information for all of the users who log in.
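As a rough idea of what the user-context piece does (a minimal sketch; the class and property names here are illustrative assumptions--the real ones are in the CI, and the class itself is created by the system-context script):

# Sketch: clear this user's old rows, then repopulate from the per-user uninstall hive.
$user = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$ns   = 'root\CustomCMClasses'

Get-CimInstance -Namespace $ns -ClassName 'cm_UserInstalledApps' -Filter "User='$($user.Replace('\','\\'))'" -ErrorAction SilentlyContinue | Remove-CimInstance

Get-ChildItem 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall' -ErrorAction SilentlyContinue |
  ForEach-Object { Get-ItemProperty $_.PSPath } |
  Where-Object { $_.DisplayName } |
  ForEach-Object {
    New-CimInstance -Namespace $ns -ClassName 'cm_UserInstalledApps' -Arguments @{
      User             = $user
      DisplayName      = [string]$_.DisplayName
      Version          = [string]$_.DisplayVersion
      Publisher        = [string]$_.Publisher
      UninstallString  = [string]$_.UninstallString
      ScriptRunTimeUTC = [datetime]::UtcNow
    } | Out-Null
  }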

A POTENTIAL drawback: let's surmise that Bob Smith logged on in January, and entries were created for him.  Since then, he has not used this box, or has even left the company.  There might be stale entries for Bob being inventoried... potentially for the life of the device.  That means you will want to create reports that filter on the 'ScriptRunTime' within the last xx days, so you don't pull stale data into reports.

--> Here <-- is the .zip containing the Configuration Item .cab to be imported into your CM Console (rename it before importing).  If you successfully import the CAB file, you don't have to do anything with the .RenameAsPS1 files in the zip.  Those .RenameAsPS1 files are there *IF* the .cab import fails: you could create your own CI and add each of those as a Rule in the CI--one left to run as system, and the other where you carefully check the box for 'run scripts by using the logged on user credentials'.  Also in the .zip is an ImportIntoDefaulClientHardwareInventory.RenameToMof file.  Presuming you didn't change the custom class from being called 'CustomCMClasses', you would rename that to just .MOF, and import it into your Console: Administration, Client Settings, Default Client Settings, Hardware Inventory.

Once you have the CI and a Baseline including the CI created, deploy the Baseline to a small collection of devices. On those few devices, interactively do policy refreshes, and run the Baseline (from the Control Panel applet).  Note that you MIGHT have to run the baseline twice--the first run creates the initial custom class and sets permissions; once that is done, it'll skip over that step next time.  Then run the baseline again.  Then, using your favorite WMI browser (wmiexplorer?), look at the CustomCMClasses namespace and the class inside.  See if it contains what you expect it to contain.  If so, hooray!
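If you'd rather not click through the Control Panel applet on each test box, the evaluation can also be kicked off from PowerShell.  This is the commonly used community pattern (assuming the two-argument TriggerEvaluation call behaves as commonly documented; substitute your own baseline's name):

# Sketch: trigger a specific baseline evaluation on the local client.
$blName = 'Your Baseline Name Here'
$bl = Get-CimInstance -Namespace 'root\ccm\dcm' -ClassName 'SMS_DesiredConfiguration' |
      Where-Object { $_.DisplayName -eq $blName }
Invoke-CimMethod -Namespace 'root\ccm\dcm' -ClassName 'SMS_DesiredConfiguration' -MethodName 'TriggerEvaluation' -Arguments @{ Name = $bl.Name; Version = $bl.Version } | Out-Null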

If you are happy with the results, import the .mof (you'll get a view likely called v_gs_userinstalledapps0... usually). Enable inventory for that.  Deploy the baseline to the rest of your environment where you want to get user-installed apps.  Note, I would NOT have the baseline run frequently.  Perhaps every 4 days?  Or every 7 days?  This information isn't mission critical, imo; it's a nice-to-have, for those (hopefully few) times when manager-types want to know what versions of Teams are out there (for example).

Sample report to get you started (once you have deployed the CI as a Baseline, tested it, and inventory is enabled):

DECLARE @60DaysAgo datetime = (GetDate()-60)
--This is so that if there are stale values from people who have left the company or are no longer using this machine, we don't see them in the reports
Select s1.netbios_name0 as 'ComputerName'
,uapps.Publisher0 as 'Publisher'
,uapps.DisplayName0 as 'DisplayName'
,uapps.Version0 as 'Version'
,uapps.InstallDate0 as 'Application Install Date If Known'
,uapps.user0 as 'Username associated with this Install'
,uapps.InstallLocation0 as 'Install Location If Known'
,uapps.UninstallString0 as 'UninstallString'
,uapps.ScriptRunTimeUTC0 as 'Date information was gathered'
from v_gs_userinstalledapps0 uapps
join v_r_system s1 on s1.resourceid=uapps.resourceid
where uapps.ScriptRunTimeUTC0 > CAST(@60DaysAgo as DATE)
order by s1.netbios_name0, uapps.publisher0, uapps.DisplayName0

 

File Inventory via Hardware Inventory

Sherry Kissinger

If, like me, you are less than pleased with how file inventory functions (badly named 'Software Inventory' in CM), this is possibly something you'd want to test in your lab (you have a lab, right?) and see whether, for those occasional requests we all seem to get for "can't you just inventory a file...", this script + mof edit would work for you.

If you have file inventory on at all (software inventory), for things like *.exe in ProgramFiles, leave it on for that (don't change everything).

But let's say you got a request to "find all .pst files stored anywhere on local drives", because your company is still battling to clean up email archives people saved as local .pst files.  Sure, you can add that to file inventory, but as you know, file inventory takes hours and HOURS to run on computers.

This script, in testing, took between 16 and 90 seconds per run.

After you have customized the script for YOUR weird one-off rule(s), tested it interactively completely outside of CM, and are happy that it populates the WMI class with the values you expect, then..

Add the customized-by-you script as a PowerShell script inside a Configuration Item, where "what means compliant" is existential: any value returned.
Add the CI to a Baseline, deploy the baseline to your test collection.

After a target has run the baseline, go into your CM Console, Administration, Client Settings, Custom Client settings, Hardware Inventory, Add... and connect to that target; in root\cimv2, add the 'cm_CustomFileInventory' class.  Monitor your CM server's dataldr.log to confirm it made the views, and, assuming you left the class enabled, after the target gets the new policy with the new inventory instruction, trigger a hardware inventory.  Once inventory arrives at your CM database, look at the newly available view (probably called something like v_gs_cm_customfileinventory0), and there you go.

Only devices which have run the baseline will have something to say, so you can limit the targets reporting back with this custom file inventory by only deploying the Baseline to a specific collection.

 

 

<#
.SYNOPSIS
For specific files or types of files, Populate a Custom WMI with that information, for later inventory retrieval

 

.DESCRIPTION
Query for files, and populate WMI

 

.NOTES
 2023-03-17 Sherry Kissinger

 

CAUTION! CAUTION!  this is NOT meant to be a replacement for File Inventory completely. This routine will populate WMI, and depending upon
the rules YOU might make, you could inadvertently cause WMI Bloat (which can cause problems, and those problems might be difficult to
identify), and then hardware inventory mif size might be too big, resulting in Hardware Inventory failing to report at all. 
For example, do NOT query for c:\ , *.*... you are just asking for everything to blow up, and do so badly.

 

This routine was originally created as a response to "we need to know about any/every pst file on the C: drive".  As you know, file inventory
searching for *.zzz files on all of the C: drive can take 30+ minutes, even if you have an SSD and relatively few files.
This script, when tested interactively on test devices (ok, it was 2 whole machines in the lab...),
took anywhere from 16 to 90 seconds, depending upon the # of files on the drive, size of the drive, etc.

 

$VerbosePreference options are
'Continue' (show the messages)
'SilentlyContinue' (do not show the messages; this should be the default)
'Stop' (show the message and halt; use for debugging)
'Inquire' (prompt the user whether it is ok to continue)
 

 

Example lines for gathering files.  These lines, would, for example...
1) Find all *.pst files anywhere on the First known drive (this does not include things like OneDrive or redirected Documents folders however)
2) Find any *.exe which happen to be in first known drive (which is usually c:) \WierdAppInTheRoot and subfolders under WierdAppInTheRoot
3) Find fubar.xml, but only if it is specifically in c:\program files\Widgets (or program files x86\widgets), do not even look in subdirectories under that. (-Recurse has been removed from those lines)
 

 

$LocalFileSystemDrives = (Get-psdrive -PSProvider FileSystem)
[System.IO.FileSystemInfo[]]$files =  Get-ChildItem -Path ($LocalFileSystemDrives.Root)[0] -Include ('*.pst') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
[System.IO.FileSystemInfo[]]$files += Get-ChildItem -Path (($LocalFileSystemDrives.Root)[0] + 'WierdAppInTheRoot') -Include ('*.exe') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
# Note: without -Recurse, -Include only matches when -Path ends in a wildcard, hence the '\*' below.
[System.IO.FileSystemInfo[]]$files += Get-ChildItem -Path ($env:ProgramFiles + '\Widgets\*') -Include ('fubar.xml') -OutBuffer 1000 -ErrorAction SilentlyContinue
[System.IO.FileSystemInfo[]]$files += Get-ChildItem -Path (${env:ProgramFiles(x86)} + '\Widgets\*') -Include ('fubar.xml') -OutBuffer 1000 -ErrorAction SilentlyContinue

 

Once you have all of the objects you want, then the next section in the script will populate the class with the values, by reading relevant
information from the $files object.

 

Once you have tested this interactively (NOT as a CI yet), and you are happy with the results you see interactively in wmiexplorer, then
create a CI with this script (modified for your purposes), and deploy to a test number of devices.  Add the custom WMI class
to your inventory, and monitor the results.

 

*if* your environment has more than just '1 drive', and you were tasked with "find pst files on ANY/all local drives", you can use
this sql query to see 'how many' of the if statements you might need, to cover the max # of drives your clients have:

 

;with cte as (select ld.resourceid, count(*) as 'count' from v_gs_logical_disk ld where ld.DriveType0=3 group by ResourceID)
Select max(cte.count) from cte

 

for example, in my environment the max # was 5.  True, there was literally only ONE box with that many logical disks... and 99.5% of the
environment 'only' had 1 local disk, but about 0.5% had 2 disks... so it "doesn't hurt" to account for your max # of logical disks; they will
only be queried if they actually exist.
#>

 

 

 

Param (
    $Namespace             = 'root\cimv2',
    $Class                 = 'cm_CustomFileInventory',
    $VerbosePreference     = 'SilentlyContinue',
    $ErrorActionPreference = 'SilentlyContinue',
    $ScriptRanDate         = [System.DateTime]::UtcNow
    )

 

Function New-WMIClassHC {
if (Get-CimClass -Namespace "$NameSpace" | Where-Object {$_.CimClassName -eq $Class} ) {
   Write-Verbose "WMI Class $Class exists"
   }

 

else {
   Write-Verbose "Create WMI Class '$Class'"
   $NewClass = New-Object System.Management.ManagementClass ("$Namespace", [String]::Empty,$Null);
   $NewClass['__CLASS']=$Class
   $NewClass.Qualifiers.Add('Static',$true)
   $NewClass.Properties.Add('FileName', [System.Management.CimType]::String,$False)
   $NewClass.Properties['FileName'].Qualifiers.Add('Key', $true)
   $NewClass.Properties.Add('FilePath', [System.Management.CimType]::String,$False)
   $NewClass.Properties['FilePath'].Qualifiers.Add('Key', $true)
   $NewClass.Properties.Add('FileVersion', [System.Management.CimType]::String,$False)
   $NewClass.Properties.Add('FileSizeKB', [System.Management.CimType]::Uint32,$False)
   $NewClass.Properties.Add('LastWriteTimeUTC', [System.Management.CimType]::DateTime,$false)
   $NewClass.Properties.Add('ScriptLastRan', [System.Management.CimType]::DateTime, $false)
   $NewClass.Put() | Out-Null
   }
   Write-Verbose "End of Trying to Create an empty $Class in $Namespace to populate later"
}

 

Write-Verbose "Delete the values in $Class in $Namespace so we can populate it cleanly. If $Class exist, you must have rights to it for this to work."
Remove-CimInstance -Namespace $Namespace -Query "Select * from $Class" -ErrorAction SilentlyContinue

 

Write-Verbose "Create $Class if it does not exist at all yet (this will only occur once per device)"
New-WMIClassHC

 

Write-Verbose "Add to the object any additional rules you may want."
Write-Verbose "localFilesystemDrives is used in case you need to 'find a file on any/all local drives'"
Write-Verbose "you may have to check how many local drives your environment has, and have enough lines to address possibilities"

 

$LocalFileSystemDrives = (Get-psdrive -PSProvider FileSystem | Where-Object {$_.DisplayRoot -notlike "\\*"})

 

$DriveLetter = ($LocalFileSystemDrives).Root[0]

 

if  ( $DriveLetter.length -eq 1) {
  $DriveLetter = $DriveLetter+':\'
}
[System.IO.FileSystemInfo[]]$files =  Get-ChildItem -Path $DriveLetter -include ('*.pst','*.foo') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
#------------------------
$DriveLetter1 = ($LocalFileSystemDrives).Root[1]
if  ( $DriveLetter1.length -eq 1) {
  $DriveLetter1 = $DriveLetter1+':\'
}

 

If ($LocalFileSystemDrives.count -ge 2) {   # -ge rather than -eq, so drives are not skipped when more exist
[System.IO.FileSystemInfo[]]$files +=  Get-ChildItem -Path $DriveLetter1 -include ('*.pst','*.foo') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
  }
#------------------------
$DriveLetter2 = ($LocalFileSystemDrives).Root[2]
if  ( $DriveLetter2.length -eq 1) {
  $DriveLetter2 = $DriveLetter2+':\'
}

 

If ($LocalFileSystemDrives.count -ge 3) {
[System.IO.FileSystemInfo[]]$files +=  Get-ChildItem -Path $DriveLetter2 -include ('*.pst','*.foo') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
  }
#-------------------------
$DriveLetter3 = ($LocalFileSystemDrives).Root[3]
if  ( $DriveLetter3.length -eq 1) {
  $DriveLetter3 = $DriveLetter3+':\'
}

 

If ($LocalFileSystemDrives.count -ge 4) {
[System.IO.FileSystemInfo[]]$files +=  Get-ChildItem -Path $DriveLetter3 -include ('*.pst','*.foo') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
  }
#-------------------------
$DriveLetter4 = ($LocalFileSystemDrives).Root[4]
if  ( $DriveLetter4.length -eq 1) {
  $DriveLetter4 = $DriveLetter4+':\'
}

 

If ($LocalFileSystemDrives.count -ge 5) {
[System.IO.FileSystemInfo[]]$files +=  Get-ChildItem -Path $DriveLetter4 -include ('*.pst','*.foo') -Recurse -OutBuffer 1000 -ErrorAction SilentlyContinue
  }
#-------------------------
 

 

#### example where you only want to look in the specific root folder, and not recursively in all subfolders.
#### note: -Include needs a wildcard in -Path (or -Recurse) to match anything, hence the '\*'.
[System.IO.FileSystemInfo[]]$files += Get-ChildItem -Path ($env:windir + '\*') -Include ('SomeFileOnlyInWindowsFolder.txt') -OutBuffer 1000 -ErrorAction SilentlyContinue | Where-Object {$_.DirectoryName -in ($Env:windir)}

 


Write-Verbose "Populate $Class in $Namespace with the file object information as queried"
Foreach ($File in $Files) {
Write-Verbose "This section is to try to get the productversion or fileversion, if the file has that metadata"
  if (![string]::IsNullOrEmpty($File.VersionInfo.ProductVersion)) {
      $FileVersion = $File.VersionInfo.ProductVersion
    } Else
    {
      if (![string]::IsNullOrEmpty($File.VersionInfo.FileVersion)) {
         $FileVersion = $file.VersionInfo.FileVersion
         }
      else {$FileVersion = ''}
    }

 

  $Size = [uint32][math]::Round(((Get-Item $File.FullName).length / 1kb),0)

 

New-CimInstance -Namespace "$Namespace" -class $Class -argument @{
        FileName=$File.Name;
        FilePath=$File.DirectoryName;
        FileVersion=$FileVersion;
        LastWriteTimeUTC=$file.LastWriteTimeUtc;
        FileSizeKB=$Size; 
        ScriptLastRan=$ScriptRanDate
        } | Out-Null
    }

 

Write-Host "Compliant"

 

CM Disable Inventory Throttling

Sherry Kissinger

Update to this previous blog:

https://tcsmug.org/blogs/sherry-kissinger/287-cm12disableinventorythrottling 

That blog entry, from 2013, was written in vbscript.  This is updated to powershell, and also ensures that the local policy override is 'the latest version', so it actually ends up being the policy in place for the 'actual' settings.

Although the default for Software Inventory (file inventory) is disabled in ConfigMgr, you have perhaps enabled it.  If you've done so... have you noticed that on some clients it can take hours and hours and HOURS before it finishes?  Or that on some clients it never finishes, and just exits with a message that it will retry later? "The system cannot continue. Cycle will be aborted and retried." will be in the inventoryagent.log.

There's a local policy override that you can set, on each of your clients, to change the default for inventory throttling from TRUE to FALSE.  Inventory throttling, in this case, is what happens when you have multiple software inventory rules--perhaps one to inventory *.exe from %programfiles%, and another for *.exe from c:\SomeLocalFolder--and in between rule 1 and rule 2, the client waits several hours before moving on (you can watch this in the inventoryagent.log).

Here's a way to quickly implement (and quickly undo, if you need to) this local policy override.

--> Attached <-- are two Configuration Items you can import into your Console (rename them from .RenameToCab).  The only one you actually need is the one called "CM Client Disable Inventory Throttling".

Additionally, as .RenameToPS1, are the scripts inside the Configuration Items, in case you want to just look at them, or make your own CI.

In your CM Console, Assets and Compliance, Compliance Settings, Compliance Baselines, import that .cab file.  Now that you have it, deploy it to a test collection.  You may want to target a group of computers which you know are exhibiting the behavior in their local inventoryagent.log as mentioned above.  Make sure when you deploy the baseline, that you DO check the box about remediate.

Because software (aka, File) inventory is (in general) slow... you may want to wait a few days to see that this baseline does what you expect it to do.  Once you are satisfied with the results, it is up to you if you want to deploy this Local Policy Override to all of your Windows systems in CM.

If, at some future time, you want to take away this local policy override, import the Configuration Item "CM Client Remove Local Policy Override for Inventory Throttling".  Obviously, remove the deployment of the original, and deploy the Delete.  (If both are deployed at the same time to the same machines... those machines will get and remove, remove and then get, the local policy override... just messy.)
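For the curious: the attached scripts follow the standard CM local-policy-override pattern--write an instance into root\ccm\policy\machine\requestedconfig with PolicySource 'local' and a high enough PolicyVersion that it wins over site policy.  A rough sketch of that pattern is below; the class name and the throttle property here are PLACEHOLDERS, not the real ones--read the .RenameToPS1 files for those:

# PLACEHOLDER class/property names -- this only illustrates the local-override pattern.
$ns = 'root\ccm\policy\machine\requestedconfig'
$existing = Get-CimInstance -Namespace $ns -ClassName 'CCM_SomeInventorySetting' -Filter "PolicySource='local'" -ErrorAction SilentlyContinue
if (-not $existing) {
    New-CimInstance -Namespace $ns -ClassName 'CCM_SomeInventorySetting' -Arguments @{
        PolicySource  = 'local'
        PolicyVersion = '9999'     # 'the latest version', so it beats the site-delivered policy
        Throttle      = $false     # placeholder for the actual throttling property
    } | Out-Null
}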

Thanks to Robert Hastings and Microsoft for the local policy override syntax!

MECM IIS customHeaders on Management Points post-QID 2011827

Sherry Kissinger

If your internal security team requires you to harden IIS, specifically in regard to QID 2011827 (a Qualys recommendation), then depending upon how your security team requires you to implement the QID 2011827 recommendations, you may need to set those "customHeaders" either at the root level of IIS (think applicationhost.config file), or at the "Default Web Site" level.

That "usually" isn't an issue, however... IIS doesn't like it when subpages (which is what Management Points use) also have customHeaders defined. And at every upgrade of CM, the Management Point is reinstalled, and if you HAD previously cleaned up those sub-page customHeaders, to accommodate the QID 2011827 settings as required by your internal security team, you either have to perpetually fight with your security team about what might be the "right way" to implement QID 2011827... or accept defeat and then, yourself, manually, cleanup the web.config files on the ManagementPoint sites under "Default Web Site" at every upgrade.

Although this isn't ideal... let's say it's a Saturday at 3am, after you've been up for 18 hours straight fixing something else, and finally the fix was "reinstall the Management Point".  You think: yay, all done, going to go to sleep.  But... you forgot about the customHeaders.

What you "could do", and obviously totally optional, is something like the 3 scripts below. This example is for the subsite of "CCM_STS", however, you would need 3 CIs. one each for "CCM_STS", "CMUserService", and "CMUserServiceWindowsAuth". You'd just change any references to CCM_STS to the other ones, once you've created and copied these.

MEMCM Inventory Installed Windows Capabilities

Sherry Kissinger

It used to be (prior to Windows 10 1809) that one could inventory the WMI class Win32_OptionalFeature and know whether RSAT was installed or not. Apparently that is no longer the case; from what I could discover online, the only supported method is to use the powershell command Get-WindowsCapability -Online (with additional filters if desired).

Gary Blok also found that the information for "Installed" things shows up under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\ComponentDetect -- as subkeys. Unfortunately, that isn't easily translated into "just a mof edit".

To tease out this information, it needs to be a script populating a custom WMI class anyway, and using the POSH command is much easier than parsing those regkeys (at least, in my opinion).

If, like me, you are tasked with "Create reports to know who has RSAT feature enabled on Windows 10", this may be a solution for you.

Attached --> Here <-- is a zip file, with the script.

To use this...

  1. In your console, create a new Configuration Item; call it whatever you want. Example: Inventory Staging for Get-WindowsCapability
  2. Under Settings, it will be a "Script", "String"
  3. Paste in the contents of the text file. The "test for compliance" will be Existential: any value is returned at all.
    1. OPTIONAL: the script is written presuming you only want the "Installed" results to be inventoried. If you want both installed and not-installed capabilities, change the variable "$TypesToGet" from "Installed" to "Both". (A sketch of the script's core follows this list.)
  4. Add this CI to a Baseline, deploy the baseline.
    1. Optional, testing. On a device which has run the baseline, using your favorite WMI viewing tool (I use WMIExplorer) go check out the local results in root\cimv2\cm_WindowsCapability. If you have results, yay, it worked.
  5. In your Console, Administration, Client Settings, Default Client Settings, right-click properties, Hardware Inventory. Set Classes... and click on "Add..." connect to that sample computer, root\cimv2, and find cm_WindowsCapability in the results, and add that. Monitor your server's dataldr.log to confirm all is well.
  6. Wait. Wait some more. After you've waited long enough, go look at select * from v_gs_cm_windowscapability0
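The heart of the attached script is small.  A hedged sketch of its core logic (assuming the class cm_WindowsCapability already exists with Name as a key property--the attached script handles creating it):

# Sketch: stage Get-WindowsCapability results into root\cimv2\cm_WindowsCapability.
$TypesToGet = 'Installed'                    # or 'Both', per step 3.1 above
$caps = Get-WindowsCapability -Online
if ($TypesToGet -eq 'Installed') {
    $caps = $caps | Where-Object { $_.State -eq 'Installed' }
}

# Clear old rows, then write the current state.
Get-CimInstance -Namespace 'root\cimv2' -ClassName 'cm_WindowsCapability' -ErrorAction SilentlyContinue | Remove-CimInstance
foreach ($cap in $caps) {
    New-CimInstance -Namespace 'root\cimv2' -ClassName 'cm_WindowsCapability' -Arguments @{
        Name  = $cap.Name
        State = [string]$cap.State
    } | Out-Null
}
Write-Host 'Compliant'   # existential rule: any returned value means compliant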

Sample SQL query, if you are looking for Machines with RSAT Active Directory Tools

select s1.netbios_name0, wc.name0, wc.state0
from v_r_system s1
join v_gs_cm_windowscapability0 wc on wc.resourceid=s1.resourceid
where wc.name0 like 'Rsat.ActiveDirectory.DS-LDS.Tools%'
order by s1.netbios_name0

CM All Members of All Local Groups - Powershell

Sherry Kissinger

"Back in the Day", --> Here <-- a vbscript was created to allow for ConfigMgr (version 2012 at the time was the version I was using) to be able to custom inventory the members of Local Groups. This was mostly in response to manager-type requests to know "what individuals or groups are inside the local Administrators group". That has been working fine for years... but times change, and it may be more palatable to use Powershell.  Powershell is more widely used and understood instead of vbscript. Additionally, if your company requires it, you can sign powershell scripts so you know they are tested and trusted internally.

Update on 2023-06-04.  I found that because of language localizations, the 'Administrators' group could come back in, for example, Cyrillic, and was difficult to easily sql-query.  So I determined to update the script to include the SID of the local group, as well as the SIDs of the users or groups inside.  While doing so... I found that there were some devices (now that I was testing in production, instead of my 2-client lab, lol) where, *if* there were orphaned IDs in the local Administrators group (or any other group, like Remote Desktop Users), the powershell command Get-LocalGroupMember would error.  After some research--https://github.com/PowerShell/PowerShell/issues/2996 --this is a known issue... which apparently was promised to be 'fixed gradually' as of September 2022... but hasn't been fixed yet... and the bug has been closed.  The conclusion in that thread, by people experiencing this error, was to "work around the problem" by using [ADSI] instead of Get-LocalGroupMember.  So that's what Sherry did--a quite major re-write of the code.

I strongly suggest that you test this in your lab environments thoroughly. Don't just blindly trust this. It is definitely a work in progress, and may have so many flaws that you'll break something, horribly. Test. Test. Test.  I also suggest you read the scripts; I did try to over-explain and add comments everywhere; but as with anything you might randomly find on the internet--I suggest you read it through first.  Know what it is and what it is trying to do, and/or test it interactively on a standalone lab box.  "Trust, but verify".

--> 2023 Version <-- is a zip file containing 2 ps1 scripts (renamed from .ps1, in case your anti-malware flags and blocks script files) and a mof file to be imported.  Additionally, the zip contains the script where Get-LocalGroupMember is used instead of ADSI... in case, one day, Microsoft's 'Get-LocalGroupMember' cmdlet is actually fixed to not fail (not holding my breath; it's apparently been an issue since 2017 and it's 2023 now, so...)

--> 2021 Version<-- is the version from 2021, which is a zip file containing 2 ps1 scripts (renamed from .ps1; in case your anti-malware flags and blocks script files), a mof file to be imported, and a basic sql query to get you started.  Just keeping it here for historical purposes.

How to use the attached... If you are familiar with CM Configuration Items, and the concept of "script + mof edit", the below is over-explained. If you are already familiar with the concepts, just download the attachments and set it up in your lab for testing; and once you are comfortable, deploy it as you like.

  1. In your CM console, go to Assets and Compliance, then Compliance Settings, then Configuration Items.
  2. Create a Configuration Item. When prompted, give it a name (the name is up to you and your standards; for the purposes of this information, I'm calling the Configuration Item "Inventory Staging for Local Group Members with Logging").
  3. This is a "Windows Desktop and Servers" type; you *do* want to check the box for "This configuration item contains application settings".
    1. Add a description if you like; perhaps the link to this blog, or the date you added this, and what manager-type wanted this information; whatever might be useful 2 years down the road when the person that comes after you is trying to figure out what this is for and why.
  4. Next.
  5. Detection Methods, select "Use a custom script to detect this application". That script will be the one in the attachment, labeled "ApplicabilityForTheCI.Rename-to-ps1". What that does is it checks the client to see if it's a Domain Controller. If it *is* a Domain Controller, then the Configuration Item is NOT APPLICABLE, and it won't run the script inside. The script itself also does a check, and bails; and hopefully you will also on purpose not EVER target your domain controller(s) with this... but mistakes happen. The more places to ensure that a DC won't be asked these types of questions, the better you'll feel about having this in your environment. You certainly don't want your DCs to try to do this.
  6. Next
  7. Settings, New... Give it a Name (any name), and a description.
    1. Setting Type = Script
    2. Data Type = String
    3. Add Script...Script Language=Powershell and copy and paste in the script contained in the attachment labeled "MainScript-ADSIMethod.Rename-To-ps1"
      • Optionally... Sign the script according to your company standards
      • Optionally... Change the logging location from %temp% to the CM client log folder (it's within the script; just comment/uncomment the correct lines)
      • Optionally... Turn off a local log file completely, according to your company standards.
  8. within the Settings area, at the top change from "General" to "Compliance Rules".
    1. New...Rule Type = Existential, and you want the default choice of "The specified script returns at least one value".
  9. Ok. Ok. Hit Next/Next/Next however many times until it's done and saved.
  10. In your CM Console, go to Assets and Compliance, then Compliance Settings, Configuration Baselines
  11. Create Configuration Baseline, give it a name and description; again--according to your own standards, and try to leave a good description for the person coming after you to know what this is for and why.
  12. Add, Configuration Items, and find the one you created above. Assuming you called it exactly what I called it, it'll be called "Inventory Staging for Local Group Members with Logging". Click Add, then OK.
  13. Don't hit the next OK yet. Select that name in the middle, and you want to "Change Purpose" from Required to Optional. NOW hit OK.
  14. If you don't yet have a collection of Test devices, go make a collection of test workstations and/or Server clients. Once you have a collection of devices (ideally, ones to which you have rights to look at their %temp% or cm logs remotely), Deploy this baseline to that collection. Frequency to run is up to you, but I would suggest no more frequently than every 3 days--honestly, this inventory staging isn't that important. Every 7 days is most likely fine.

  15. TEST TEST TEST

    On those test devices, trigger policy refreshes, and when the baseline appears, have it run. Depending upon which log location you set, you can check that log location for the log file. Additionally, you can use your favorite WMI browser (WMIExplorer?) to check root\cimv2\cm_localgroupmembersV2 and see if what will be reported, matches reality.

  16. Once you have confirmed it does what you want it to do, you will want to setup ConfigMgr to be able to inventory this custom WMI Class. NOTE!!!  If you have previously used this routine, the WMI Class is NEW, "CM_LocalGroupMembersV2". Every environment is different, so I can't predict what you may or may not need to do, in your environment for this customization. In general, in your CM Console, go to Administration, Client Settings, right-click "Default Client Settings", Properties, then Hardware Inventory. From the attachment, have the "ToBeImported-ADSI-v2-Method.mof" available. Set Classes... then Import the .mof.
  17. MONITOR your <server, CM installed location>\Logs\dataldr.log and confirm the mof is imported successfully, and the view is created.
  18. On those test clients (remember, you have NOT deployed this yet to most devices); wait a bit, then policy refresh. Then do a Hardware Inventory action. Monitor the client's inventoryagent.log, and hopefully you'll see the wmi query for select...from cm_localgroupmembersv2. Wait a bit for your server to process that inventory, then using SQL (or I suppose, resource explorer) to check that box' inventory--confirm the values were reported.
  19. Once you've confirmed that all the sections work--from the CI/Baseline, to inventory, then you can deploy that Baseline to the devices you want to report; that's up to you of course. all workstations? all workstations and all servers (but NOT Domain Controllers)? Just that <insert annoying internal team that always tries to bypass the rules and puts a random local user in the local Administrators group, because "they need it" (even when every company policy says to never do that, so they need to get yelled at by upper management, and you have to tell upper management who they need to yell at, using this routine)> ?

Caution... because of the bug with Get-LocalGroupMember, please, please read through the description of the issue https://github.com/PowerShell/PowerShell/issues/2996 . This issue affects orphaned accounts left behind... but MAY ALSO affect completely legitimate accounts or groups on AAD-joined devices.  So before you think "great, I'll use this to clean up orphaned entries left behind..." DO YOUR RESEARCH.  In that link, for example, ganlbarone says "Local admin groups on azure machines "look like" broken sids.. Even though they really arent".
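For context, the [ADSI] approach at the heart of the rewrite looks roughly like this (a simplified sketch; it resolves the Administrators group by its well-known SID so localization doesn't matter, and the real script gathers much more detail):

# Resolve the localized name of BUILTIN\Administrators from its well-known SID.
$sid = New-Object System.Security.Principal.SecurityIdentifier('S-1-5-32-544')
$adminGroupName = $sid.Translate([System.Security.Principal.NTAccount]).Value.Split('\')[-1]

# Enumerate members via the WinNT ADSI provider instead of Get-LocalGroupMember.
$group = [ADSI]"WinNT://$env:COMPUTERNAME/$adminGroupName,group"
foreach ($member in @($group.psbase.Invoke('Members'))) {
    $name      = $member.GetType().InvokeMember('Name',      'GetProperty', $null, $member, $null)
    $sidBytes  = $member.GetType().InvokeMember('objectSid', 'GetProperty', $null, $member, $null)
    $memberSid = (New-Object System.Security.Principal.SecurityIdentifier([byte[]]$sidBytes, 0)).Value
    [pscustomobject]@{ Group = $adminGroupName; Member = $name; SID = $memberSid }
}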

Sample SQL to get you started... This would be "show me users and groups which are in the local Administrators group, where it's not Domain Admins, nor the legitimate local Administrator account, even if it has been renamed".
select
s1.netbios_name0 as 'ComputerName'
,lgm.Account0 as 'Account or GroupName'
,lgm.sid0 as 'SID of the Account or GroupName, if known'
,lgm.Category0 as 'Category'
,lgm.Domain0 as 'Domain or Local ComputerName, if Associated with this Account'
,lgm.Enabled0 as 'if local account, is it enabled'
,lgm.name0 as 'Name of the Local Group on this device'
,lgm.Groupsid0 as 'SID of the Local Group on this device'
,lgm.Type0 as 'Type of account'
from v_GS_LocalGroupMembersV20 lgm
join v_r_system s1 on s1.resourceid=lgm.ResourceID
Where lgm.GroupSID0 = 'S-1-5-32-544'  --This is Administrators group, regardless of groupname
and right(lgm.sid0,4) <> '-512' -- Ending in -512 means it's the Domain Admins group (so it's valid)
and right(lgm.sid0,4) <> '-500' -- Ending in -500 means it's the local Administrator account (so it's valid)

Localization issues

On 2021/12/02, someone did test this in their environment, and found an issue --> SCCM Query for local Admin - Microsoft Q&A.  When I was testing in the super small lab, the only devices involved had en-US localization.  The tester in that thread, Paolo Bragagni, found that because of different localization, instead of an ObjectClass of "User", he would get 'Utente' (Utente is Italian for User).

Their solution was to slightly modify a section of the script, to look for "either one" of those ObjectClass results.  

Note this only affected the ability to report on locally disabled user accounts; other elements of the script worked without modification.  There may be a better way to work around localization issues; but this "worked for them".

 if ( ($ReturnedValues.PrincipalSource -eq 'Local') -and (($ReturnedValues.ObjectClass -eq 'User') -or ($ReturnedValues.ObjectClass -eq 'Utente'))) {

------------
2023-05-31: Another localization issue: the 'name' of Administrators would come back localized... which was fine for European-based languages (mostly), but other alphabets would land in SQL yet be difficult to query against (as a European-language speaker/coder).  So the script now attempts to gather the SIDs of the group, and the SIDs of the accounts/groups inside, instead.

Localization Update(s)

To address the localization issues, Sherry Kissinger modified the script slightly on 2021/12/07.  The attachment has been modified to no longer look for the words "local" or "User" when checking whether a user account was enabled or disabled.

To address the localization issues (in 2023), Sherry Kissinger majorly modified the script to use [ADSI] and gather the SIDs as well.

MEMCM IIS Settings you may want for your Management Points and Distribution Points

Sherry Kissinger

Over the years we've uncovered various iis settings for our Management Points and Distribution Points, which we've found needed tweaking (for a company our size and complexity). Perhaps none of these settings will be relevant in your environment. If you have some issues with your clients' ability to communicate to IIS these settings may be a starting point for your troubleshooting or remediation of your MPs and DPs.

These are all Configuration Items:
- 1 test-and-remediation CI for any Management Point role servers
- 4 test-and-remediation CIs for any Distribution Point role servers

Since I've been told that trying to import an exported .cab of these CIs often fails, I'm instead going to list out every setting and script inside, instead of trying to make it "easy" by offering a .cab for import.

You'll want to make all of these CIs "Application" type CIs. That is so that you can add all 5 CIs to one baseline, and target the baseline to a collection of "all your CM Servers", without having to break up and maintain collections for "these are the MP servers" and "these are the DP servers". Let the CI do the "should I bother" check, using the application detection logic.


Management Point ones--you only want your servers with the MEMCM Management Point role to deserve this CI. This is what I currently have as the application detection logic:


<#
.SYNOPSIS
This is to check if the server has a MP role
#>

Param (
$VerbosePreference = 'SilentlyContinue',
$ErrorActionPreference = 'SilentlyContinue'
)
$Value = (get-itemproperty 'HKLM:\software\Microsoft\sms\mp' | Select IISPortsList).IISPortsList
if (-not ([string]::IsNullOrEmpty($Value))) {
write-host $Value
}

Distribution Point Ones--you only want your servers with the MEMCM DP role and IIS to deserve these 4 CIs. This is what I currently have as the application detection logic for the 4 CIs for the DP ones:

 


<#
.SYNOPSIS
This is to check if the server has a DP role
#>

Param (
$VerbosePreference = 'SilentlyContinue',
$ErrorActionPreference = 'SilentlyContinue',
$WebServerInstalled = (Get-WindowsFeature -Name Web-Server).InstallState
)
$Value = (get-itemproperty 'HKLM:\software\Microsoft\sms\dp' | Select ContentLibraryPath).ContentLibraryPath
if (-not ([string]::IsNullOrEmpty($Value)) -and ($WebServerInstalled -eq 'Installed')) {
write-host $Value
}


For each individual CI...
The single Management Point role CI is this:

  1. applicationPoolDefaults queueLength should be 4000
    Script, Integer
    Why is this needed? IIS default out of the box is 1000; MEMCM supports 4000. The reason you want the max: if you have a lot of clients (more than 1000) all trying to communicate with the server at once, the machines beyond 1000 may get communication failures. This can result in clients not able to download policy, nor able to transmit information to the Management Point.
    1. Discovery Script:
      import-Module webadministration
      (Get-WebConfiguration /system.applicationHost/applicationPools/applicationPoolDefaults).queueLength
    2. Remediation Script
      import-Module webadministration
      Set-WebConfigurationProperty /system.applicationHost/applicationPools/applicationPoolDefaults -Name queueLength -value 4000
    3. Compliance Rule is that this will be an Integer of 4000
      1. Make sure you check that box about 'Run the specified remediation script when this setting is noncompliant' (if you forget, then even if you deploy the baseline w/remediation, it still won't remediate)

So... that was the easy one: just the MP role CI, to allow more clients to chat. For Distribution Point IIS settings, we've had to tweak multiple things over the years. The following 4 CIs are for your DP role servers, created using the "application detection logic" for a DP role (mentioned above).

  1. applicationPoolDefaults queueLength should be 4000
    Script, Integer
    Why is this needed? IIS default out of the box is 1000; MEMCM supports 4000. The reason you want the max: if you have a lot of clients (more than 1000) all trying to communicate with the server at once, the machines beyond 1000 may get communication failures. This can result in clients not able to download content.
    1. Discovery Script:
      import-Module webadministration
      (Get-WebConfiguration /system.applicationHost/applicationPools/applicationPoolDefaults).queueLength
    2. Remediation Script
      import-Module webadministration
      Set-WebConfigurationProperty /system.applicationHost/applicationPools/applicationPoolDefaults -Name queueLength -value 4000
    3. Compliance Rule is that this will be an Integer of 4000
      1. Make sure you check that box about 'Run the specified remediation script when this setting is noncompliant' (if you forget, then even if you deploy the baseline w/remediation, it still won't remediate)
  2. SMS Distribution Points Pool appConcurrentRequestLimit should be 65535
    Script, Integer
    Why is this needed? If it's not the max allowed, what can happen is 503.2 IIS errors on the Distribution Points; this alleviates those errors.
    1. Discovery Script
      <#
      .SYNOPSIS
      Query applicationHost.config, <configuration> , <system.webServer>,
      change <serverRuntime /> for appConcurrentRequestLimit
      .DESCRIPTION
      Query applicationhost.config, <configuration> <system.webServer>, <serverRuntime />
      .NOTES
      Why: Part of alleviating the 503.2 IIS errors on the Distribution Points
      2019-12-05 Sherry Kissinger
      .EXAMPLES
      #>
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      Import-Module WebAdministration
      (Get-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/serverRuntime" -name "appConcurrentRequestLimit").Value
    2. Remediation Script
      <#
      .SYNOPSIS
      Edit applicationHost.config, <configuration> , <system.webServer>,
      change <serverRuntime />
      to <serverRuntime appConcurrentRequestLimit="65535" />
      .DESCRIPTION
      Modify applicationhost.config, <configuration> <system.webServer>, <serverRuntime />
      .NOTES
      Why: alleviate the 503.2 IIS errors on the Distribution Points
      2019-12-05 Sherry Kissinger
      .EXAMPLES
      #>
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      Import-Module WebAdministration
      Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/serverRuntime" -name "appConcurrentRequestLimit" -value 65535
    3. what means compliant: 65535
      1. Make sure you check that box about 'Run the specified remediation script when this setting is noncompliant' (if you forget, then even if you deploy the baseline w/remediation, it still won't remediate)
  3. SMS Distribution Points Pool RapidFail Should be Disabled
    Script, String
    why is this needed? IIS defaults to stopping (and not restarting) application pools if "too many" errors are encountered. Well, in an environment our size... we get errors all the time. We'd rather clients keep trying to communicate, even if that generates IIS errors. We certainly don't want the application pools to stop.
    1. Discovery Script
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      import-Module webadministration
      (get-itemproperty 'IIS:\AppPools\SMS Distribution Points Pool' -name failure.rapidFailProtection).Value
    2. Remediation Script
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      import-Module webadministration
      set-Itemproperty 'IIS:\AppPools\SMS Distribution Points Pool' -name failure.rapidFailProtection -value 'False'
    3. what means compliant, the returned value = False
      1. Make sure you check that box about 'Run the specified remediation script when this setting is noncompliant' (if you forget, then even if you deploy the baseline w/remediation, it still won't remediate)
  4. SMS Distribution Points No FileExtensionFilters
    Script, Integer
    why is this needed? By default, IIS will filter some file extensions. For us, occasionally content being downloaded would include files with those exact extensions, like a .mdb or .vb or .config, etc. This would result in the client claiming "hash mismatch", because, quite correctly, IIS had a Request Filtering rule denying the ability to download a file from IIS ending in .mdb / .vb / whatever. But... we *DO* need files of those types to be downloadable into cache; if that is what is in the source files for an application, that is what we need to support. This will remove all fileextension filters, if there is a DP role.
    1. Discovery Script
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      import-Module webadministration
      $CountFileExtensionFilters = (Get-WebConfigurationProperty -Filter 'System.WebServer/Security/requestFiltering/fileExtensions' -PSPath 'IIS:\Sites\Default Web Site' -Name 'Collection' | Measure-Object).Count
      Write-Host $CountFileExtensionFilters
    2. Remediation Script
      $VerbosePreference = 'SilentlyContinue'
      $ErrorActionPreference = 'SilentlyContinue'
      import-Module webadministration
      Remove-WebConfigurationProperty -Filter 'System.WebServer/Security/requestFiltering/fileExtensions' -PSPath 'IIS:\Sites\Default Web Site' -Name 'Collection'
    3. what means compliant, equals  0
      1. Make sure you check that box about 'Run the specified remediation script when this setting is noncompliant' (if you forget, then even if you deploy the baseline w/remediation, it still won't remediate)

 

Then of course.. TEST TEST TEST.

Add these 5 new CIs to a Baseline, and deploy to a single server with one of the roles; and see "what would happen if...".  If you are satisfied it might be helpful, you can then delete the deployment, and redeploy "with remediation", and test again.

 

Reporting on Attached Monitor info as available in WMIMonitorID

Sherry Kissinger

I thought this information was already blogged by someone else--I certainly know I stole it from someone else years ago. But now I can't find that blog. If this is your work; please accept my apologies for not crediting you correctly.

Reporting on "Attached Monitors" is occasionally something which your business requests. The best solution in my humble opinion is from Enhansoft.com, part of their Reporting suite includes a client-deployed utility for exhaustively being able to report on attached monitor information https://www.enhansoft.com/products-services/enhansoft-reporting/ 

However, if you don't have any budget at all, but are still tasked with getting "attached monitor information", then although it's a poor second, you can get 'some information' out of a built-in WMI class: https://docs.microsoft.com/en-us/windows/win32/wmicoreprov/wmimonitorid
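Before bothering with the mof side, you can preview what the class holds on any one machine. The string properties are uint16 arrays of character codes--exactly the 'not too friendly' format the SQL at the end of this post decodes. A quick local peek:

# Decode WmiMonitorID's uint16-array properties into readable strings.
Get-CimInstance -Namespace 'root\wmi' -ClassName 'WmiMonitorID' | ForEach-Object {
    [pscustomobject]@{
        Manufacturer      = -join ($_.ManufacturerName | Where-Object { $_ -gt 0 } | ForEach-Object { [char]$_ })
        Name              = -join ($_.UserFriendlyName | Where-Object { $_ -gt 0 } | ForEach-Object { [char]$_ })
        Serial            = -join ($_.SerialNumberID   | Where-Object { $_ -gt 0 } | ForEach-Object { [char]$_ })
        YearOfManufacture = $_.YearOfManufacture
    }
}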

Step 1: In your CM Console, Administration, Client Settings, right-click on Default Client Settings, select Properties. Go to "Hardware Inventory" on the left, then "Set Classes..." on the right. Choose "Add..." and connect to <some computer you have admin rights on>, root\wmi (not root\cimv2), and find "WMIMonitorID". Add that. Hit OK til out. Monitor <your server>\logs\dataldr.log to see the class being created, and the view created. Take note of the view which was created in your environment.

Step 2: Wait. You are waiting for clients to get this new policy, and then report hardware inventory using this new policy. Depending upon your environment, this could be minutes to hours to even a week--only you know your own environment and timings.

Step 3: Below is sql code to pull out the information from the 'not too friendly' info in that wmi class. Just make sure you replace the v_gs_wmimonitorID0 view with what it REALLY is for your environment. Your environment might not have called the view v_gs_wmimonitorID0; again; that could be unique to your environment.

If(OBJECT_ID('tempdb..#TempMonInfo') Is Not Null)
Begin
Drop Table #TempMonInfo
End

--###############################################
--Create #Temp Table, and insert specific data
--Data will be used later in the report
--##############################################

create table #TempMonInfo(
ResourceID int,
UserFriendlyName0 nvarchar(255),
UserFriendlyNameLength0 int,
UserFriendlyNameConv varchar(255),
ManufacturerName0 nvarchar(255),
ProductCodeID0 nvarchar(255),
SerialNumberID0 nvarchar(255),
WeekOfManufacture0 int,
YearOfManufacture0 int
)

insert Into #TempMonInfo
(ResourceID, ManufacturerName0, ProductCodeID0, SerialNumberID0, WeekOfManufacture0, YearOfManufacture0,
UserFriendlyName0, UserFriendlyNameLength0)
select
ResourceID, ManufacturerName0, ProductCodeID0, SerialNumberID0, WeekOfManufacture0, YearOfManufacture0,
UserFriendlyName0, UserFriendlyNameLength0
from v_GS_WMIMonitorID0

;WITH n AS
(
SELECT
NUMBER = ROW_NUMBER() OVER (ORDER BY s1.[object_id])
FROM sys.all_objects AS s1, sys.all_objects AS s2
)
, final as (
SELECT
MON.ResourceID,
MON.UserFriendlyName0,
CONV_FN.VAL AS UserFriendlyNameConverted,
MON.ManufacturerName0 AS [Make],
CONV_MAKE.VAL AS MakeConverted,
MON.ProductCodeID0 AS [ProductCode],
MON.SerialNumberID0 AS [SerNum],
CONV_SN.VAL AS SerNumConverted,
MON.YearOfManufacture0 AS [YearOfManufacture],
MON.WeekOfManufacture0 AS [WeekOfManufacture]
FROM #TempMonInfo MON
CROSS APPLY ( SELECT
CASE
WHEN UserFriendlyName0 LIKE '%,%'
THEN (SELECT CHAR([value]) FROM (SELECT [Value] = SUBSTRING(MON.UserFriendlyName0, [Number],CHARINDEX(',', MON.UserFriendlyName0 + ',', [Number]) - [Number]) FROM n WHERE Number <= LEN(MON.UserFriendlyName0) AND SUBSTRING(',' + MON.UserFriendlyName0, [Number], 1) = ',') SPLT WHERE [value] > 20 FOR XML PATH(''),TYPE).value('.','varchar(max)')
ELSE UserFriendlyName0
END AS VAL) CONV_FN
CROSS APPLY ( SELECT
CASE
WHEN MON.ManufacturerName0 LIKE '%,%'
THEN (SELECT CHAR([value]) FROM (SELECT [Value] = SUBSTRING(MON.ManufacturerName0, [Number] ,CHARINDEX(',', MON.ManufacturerName0 + ',', [Number]) - [Number]) FROM n WHERE Number <= LEN(MON.ManufacturerName0) AND SUBSTRING(',' + MON.ManufacturerName0, [Number], 1) = ',') SPLT WHERE [value] > 20 FOR XML PATH(''),TYPE).value('.','varchar(max)')
ELSE MON.ManufacturerName0
END AS VAL) CONV_MAKE
CROSS APPLY ( SELECT
CASE
WHEN MON.SerialNumberID0 LIKE '%,%'
THEN (SELECT CHAR([value]) FROM (SELECT [Value] = SUBSTRING(MON.SerialNumberID0, [Number],CHARINDEX(',', MON.SerialNumberID0 + ',', [Number]) - [Number]) FROM n WHERE Number <= LEN(MON.SerialNumberID0) AND SUBSTRING(',' + MON.SerialNumberID0, [Number], 1) = ',') SPLT WHERE [value] > 20 FOR XML PATH(''),TYPE).value('.','varchar(max)')
ELSE MON.SerialNumberID0
END AS VAL) CONV_SN
)

Select
s1.Netbios_Name0 as 'Computername',
final.resourceid,
final.makeconverted
, UserFriendlyNameConverted
, final.ProductCode,
final.SerNumConverted, final.WeekOfManufacture, final.YearOfManufacture
,case when makeconverted = 'aaa' then 'Asus'
when makeconverted= 'ACI' then 'Asus'
when makeconverted= 'ACR' then 'Acer'
when makeconverted= 'APP' then 'Apple'
when makeconverted= 'ATL' then 'Atlona'
when makeconverted= 'BBY' then 'Insignia'
when makeconverted= 'BNQ' then 'Benq'
when makeconverted= 'CPQ' then 'Compaq'
when makeconverted= 'DCL' then 'DCLCD'
when makeconverted= 'DEL' then 'Dell'
when makeconverted= 'ELE' then 'Element'
when makeconverted= 'ELM' then 'Doublesight'
when makeconverted= 'EMA' then 'eMachines'
when makeconverted= 'ENC' then 'Eizo'
when makeconverted= 'EPI' then 'Envision'
WHEN makeconverted= 'FNI' then 'FUNAI/SYLVANIA'
when makeconverted= 'GSM' then 'LG'
when makeconverted= 'GWY' then 'Gateway'
when makeconverted= 'HKC' then 'V7'
when makeconverted= 'HPN' then 'HP'
when makeconverted= 'HRE' then 'Haier'
when makeconverted= 'HSD' then 'Hanns.G'
when makeconverted= 'HSP' then 'Hannspree'
when makeconverted= 'HTC' then 'Hitachi'
when makeconverted= 'HWP' then 'HP'
when makeconverted= 'IFS' then 'Infocus'
when makeconverted= 'IZI' then 'Vizio'
when makeconverted= 'LEN' then 'Lenovo'
when makeconverted= 'MED' then 'Medion'
when makeconverted= 'MEL' then 'NEC/Mitsubishi'
when makeconverted= 'NOK' then 'Nokia'
when makeconverted= 'PGS' then 'Princeton'
when makeconverted= 'PHL' then 'Philips'
when makeconverted= 'PLN' then 'Planar'
when makeconverted= 'PNR' then 'Planar'
when makeconverted= 'PTS' then 'Proview'
when makeconverted= 'SAM' then 'Samsung'
when makeconverted= 'SEK' then 'Seiki'
when makeconverted= 'SHP' then 'Sharp'
when makeconverted= 'SNY' then 'Sony'
when makeconverted= 'SPT' then 'Sceptre'
when makeconverted= 'SYN' then 'Olevia'
when makeconverted= 'TSB' then 'Toshiba'
when makeconverted= 'UPS' then 'Upstar'
when makeconverted= 'VIZ' then 'Vizio'
when makeconverted= 'VSC' then 'ViewSonic'
when makeconverted= 'WDE' then 'Westinghouse'
when makeconverted= 'WDT' then 'Westinghouse'
when makeconverted= 'WET' then 'Westinghouse'
else MakeConverted
end as 'BestGuessMake'
from Final
join v_R_System_Valid s1 on s1.resourceid=final.resourceid
-- Filtering out some Makes known to not be relevant ... at least when this report was created years ago.
-- Comment out the next line if you want these things anyway.
and Final.Make not in ('AUO','BOE','SEC','SDC','LGD','CMN','64, 64, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0','77, 83, 95, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0','88, 72, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0')

If(OBJECT_ID('tempdb..#TempMonInfo') Is Not Null)
Begin
Drop Table #TempMonInfo
END

ConfigMgr Inventory of Powershell Versions

Sherry Kissinger

If you happen to be curious about what versions of Powershell are installed/available on your clients, here's one way to pull out the information.  Note that the regkey locations for some of this information have changed from version 2 to higher versions, so it's completely possible that a future update to Powershell will change the regkey location again; if that happens, a modification to these .mof files will be necessary.  As of Windows 8.1, these worked to report the versions of Powershell installed.
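For reference, the values the mofs report line up with the well-known PowerShellEngine registry keys (v1/v2 under key '1', v3 and higher under key '3'); you can eyeball them locally like this:

# Peek at the same engine keys the mof edit inventories.
foreach ($engine in '1', '3') {
    $key = "HKLM:\SOFTWARE\Microsoft\PowerShell\$engine\PowerShellEngine"
    if (Test-Path $key) {
        Get-ItemProperty -Path $key | Select-Object PowerShellVersion, RuntimeVersion, PSCompatibleVersion
    }
}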

Take the --> attached <--; inside are two .mof files.  If you are on ConfigMgr 2012, place the contents of the 'posh-configuration.mof.txt' at the bottom of your <inbox location>\clifiles.src\hinv\configuration.mof file.  In your ConfigMgr 2012 console, in Client Settings, Default Client Settings, Hardware Inventory, Classes... Import the 'posh-to-be-imported.mof'.

Wait for clients to start reporting, once you get some clients reporting, the below sql query should get you started:

;with CTE as (
  select distinct resourceid
   ,RTRIM(substring(ISNULL((select ','+PSCompatibleVersion0  
        from v_GS_PowerShell0 p1
        where p1.ResourceID=t2.resourceid for XML path ('')),' '),2,2000)) as PSCompatibleVersions0
   ,RTRIM(substring(ISNULL((select ','+PowerShellVersion0
        from v_GS_PowerShell0 p1  where p1.ResourceID=t2.resourceid for XML path ('')),' '),2,2000)) as PowerShellVersions0
   ,RTRIM(substring(ISNULL((select ','+RuntimeVersion0
        from v_GS_PowerShell0 p1  where p1.ResourceID=t2.resourceid for XML path ('')),' '),2,2000)) as RunTimeVersions0
 from v_R_System t2
)
   select distinct sys1.netbios_name0 [ComputerName]
 ,cte.RunTimeVersions0 [RunTime Versions]
 ,cte.PSCompatibleVersions0 [PS Compatible Versions]
 ,cte.PowerShellVersions0 [PowerShell Versions]
 from v_R_System sys1
 left join CTE on cte.ResourceID=sys1.ResourceID
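
Once data is flowing you can slice it other ways, too.  For example, here's a variation of mine (not in the attachment) to flag clients still reporting a 2.x engine:

-- Clients reporting a PowerShell 2.x engine (uses the same v_GS_PowerShell0 view as above)
select distinct sys1.Netbios_Name0 as [ComputerName]
 ,p.PowerShellVersion0 as [PowerShell Version]
from v_R_System sys1
join v_GS_PowerShell0 p on p.ResourceID = sys1.ResourceID
where p.PowerShellVersion0 like '2.%'
order by sys1.Netbios_Name0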


MEMCM Keep a System Group active without re-running Group Discovery

| Sherry Kissinger | Sherry Kissinger

Background for context:

I happen to work at a large company, which has more than 300,000 employees. Using Microsoft Endpoint Manager Configuration Manager (MEMCM), we often deploy 'free' software to the majority of users (think something like Adobe Reader, or Google Chrome). This is so that as soon as <new employee> logs into a workstation, they can go to Software Center, and install software they might need to perform their job.

How we accomplish this: all new users are added to a group called (for purposes of this blog) "SC_All_Employees".
That domain group is (or was, until this workaround) discovered using Group Discovery. If you are unfamiliar with group discovery: in your MEMCM console, go to Administration, Hierarchy Configuration, Discovery Methods, Active Directory Group Discovery; in Discovery Scopes there is a single rule for this group. I had browsed for the group name, and it resolved to
Distinguished Name: CN=SC_All_Employees,CN=CompanyGroups,DC=MyCompany,DC=ORG
GroupName = SC_All_Employees
GroupType = Security Group - Global

The collection query (WQL) is this (selecting Usergroups, not Users, when creating the collection query):
Select SMS_R_UserGroup.ResourceID
from SMS_R_UserGroup
Where
SMS_R_UserGroup.UniqueUsergroupName = "MyCompany\\SC_All_Employees"

This results in there being ONE and only ONE resourceid in the collection: the resourceid for the Group, not the resourceids for the users who might be in that group. (This is important.)
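
If you want to prove that to yourself, a quick sanity check (my query; 'XXX00123' is a made-up placeholder--substitute your collection's actual CollectionID) should return exactly one row:

-- Expect 1 row: the group itself, not its members
select fcm.ResourceID, fcm.Name
from v_FullCollectionMembership fcm
where fcm.CollectionID = 'XXX00123'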

Why do we like / use this? Because it's all then based on one single thing being updated--Active Directory. Add a user to the group; the user authenticates to Active Directory; the token for that AD group membership is attached to that login; and CM can use that group SID to check whether they deserve any policies...policies that, for us, result in things being available in Software Center. It can literally be a minute between adding a user in AD, the user locking/unlocking their workstation, and the user launching Software Center--and voila, the stuff is visible. It's wicked fast--to the end user.



The Dilemma:

As of ECM current branch 2006 (and it has been this way for decades), when one discovers groups which happen to be Security Groups, it is impossible to NOT discover the users inside the group. If you watch the ADsgdis.log on your primary site, you'll see it discover the group...and then, within a few minutes, discover all the users in that group. That's fine if your strategy for collection creation is to have a collection query like this:

Select SMS_R_User.Resourceid from SMS_R_User
where SMS_R_user.UserGroupName = "MyCompany\\SC_All_Employees"

That's limiting to USERS, not USERGROUPS.

However, that isn't what we at this large company need or desire. Having delta discovery run and having CM create the user-to-group relationships is not ideal at our size. So we don't even WANT to record the user-to-group relationships in CM. We want just and only the group, the group SID, and that one, single resourceid.
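
(For context, those relationship rows are queryable; this is my own illustration, assuming the standard v_RA_User_UserGroupName view, of the very data we don't want CM to have to materialize:)

-- Count of user-to-group rows CM creates when it discovers the members
select count(*) as [Relationship rows for this group]
from v_RA_User_UserGroupName ugn
where ugn.User_Group_Name0 = 'MyCompany\SC_All_Employees'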

When CM has to discover all 300,000+ users in that group, and create those relationships, it causes replication delays and backlogs in DDR processing. It's a strain on the system for no benefit we actually want.

So you think: so what, just have it discover the group once, and then turn discovery off; it'll be there forever, right? Nope: by design there is a task called "Delete Aged Discovery Records". Let's say you have that set to 90 days. If you turn off discovery of the group "SC_All_Employees", in 91 days that resourceid will be removed (by design, and in general that is a good thing), and you have to re-discover it again.
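
Incidentally, you can watch that bookkeeping happen in the site database. This is my own peek (same DiscItemAgents table the workaround below leans on; column names as I recall them, so verify in a lab) at what's currently flagged for the next run of the task:

-- Rows with DueForAgeOut = 1 are the ones Delete Aged Discovery Records will remove
select dia.ItemKey, dia.AgentName, dia.AgentTime, dia.DueForAgeOut
from DiscItemAgents dia
where dia.DueForAgeOut = 1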

There is a uservoice item for this; until they fix it, if this is happening to you, please vote it up:

https://configurationmanager.uservoice.com/forums/300492-ideas/suggestions/11096859-ad-group-discovery-discovering-group-members


The Totally Unsupported and Do Not Do It Workaround (so if you do this, it's not my fault, I told you not to do this).

In 2 labs, and then in production, this worked to "keep alive" a group once it had been discovered, and NOT have it be automatically removed after the period you have defined for "Delete Aged Discovery Records".

If you have ANY hesitation about this at ALL, don't do it. Don't even think about doing it. If you think you might want to do this anyway, do this in your LAB environment first. Don't have a lab? Make one. There are several guides on making a CM lab using virtual machines.

SO... you decided to do this anyway, even though I said it's unsupported, and <insert deity here> help you if you mess something up... you have a backup of your environment, right?


1) *do* take the replication hit, and DDR processing hit once, for the group "SC_All_Employees" (insert your own group here, whatever it is).
2) remove that rule from Group Discovery.

3) Query to look at what the values are "now" (before you do any testing).

DECLARE @RID BIGINT = (Select Resourceid from v_r_userGroup ug where ug.Unique_Usergroup_Name0 like '%SC_All_Employees')
Select U.ResourceID, U.Name0, U.Creation_Date, U.Windows_NT_Domain0 from v_r_usergroup u where u.ResourceID=@RID
Select * from DiscItemAgents dia where dia.ItemKey=@RID
Select * from DiscItemAgents_Local dial where dial.ItemKey=@RID
Select count(fcm.collectionid) as 'Count of Collections where this group is a member'
from v_fullCollectionMembership fcm where fcm.ResourceID=@RID

4) Set up a SQL job to "keep alive" that specific group. You see... deep in SQL is where CM records which discovered resourceids should be marked for deletion at the next run of the Delete Aged Discovery Records task. This circumvents that process... by tricking SQL into thinking the group *has* been recently discovered, so it doesn't get culled.

The SQL job runs on your primary site server (the one that hosts the CM_... SQL database and did the Group Discovery in step 1 above).
We currently have it run twice daily (it likely only needs to run weekly, but I was testing this routine),
and it runs in the CM_ database (when you set up the job, you have to say which database it runs against).

The SQL inside that job is below; note the DECLARE @RID, and make sure you put in your correct group.  This blog might also put 'smart quotes' around things, or add line breaks where I didn't mean to have line breaks.  Remember the above warning where I said don't do this if you have any reservations?  Yeah... be careful what you do. Also note the doubled single quotes ( '' ); those are needed because of how the SQL gets embedded in the job. If you are going to run this interactively for testing, you may need to remove one of the single quotes in each instance.
You may want to run this interactively against your CM_... database, for testing, before making it a recurring SQL Agent job.


--Get ResourceID, current utc time, groupname for the log, and the current value of DueForAgeOut
DECLARE @RID BIGINT = (Select Resourceid from v_r_userGroup ug where ug.Unique_Usergroup_Name0 like ''%SC_All_Employees'')
DECLARE @NOW DATETIME = GETUTCDATE()
DECLARE @SiteCode nvarchar(3) = (Select Right(db_name(),3))
DECLARE @GroupToUpdate nvarchar(80) = (Select Unique_UserGroup_Name0 from v_r_usergroup where ResourceID=@RID)
DECLARE @CurrentDueForAgeOut int = (Select DueForAgeOut from DiscItemAgents where ItemKey=@RID and AgentSite=@SiteCode)

--Update the _local with current utc date, and log
UPDATE [DiscItemAgents_Local]
SET AgentTime = @NOW
Where ITEMKEY = @RID
DECLARE @VALUE nvarchar(max) = (@GroupToUpdate + '' has been updated to '' + CAST(@NOW as varchar) + '' in the DiscItemAgents_Local Table.'')
RAISERROR (@VALUE,1,1) with LOG

--Depending upon if it''s currently Null or not, set DiscItemAgents to either Null, or 0 if already not-0. Values possible
--are Null, 0, or 1. 1 is the value which triggers deleting the record when the task for Delete Aged DDR records runs.

IF @CurrentDueForAgeOut IS Null
BEGIN
UPDATE [DiscItemAgents]
Set DueForAgeOut = NULL
, AgentTime = @NOW
Where ItemKey = @RID and AgentSite = @SiteCode

DECLARE @VALUE2 nvarchar(max) = (@GroupToUpdate + '' has been updated in the DiscItemAgents Table with these values ''+ CAST(@NOW as varchar) + '', DueForAgeOut to NULL.'')
RAISERROR (@VALUE2,1,1) with LOG
END
ELSE
BEGIN
UPDATE [DiscItemAgents]
Set DueForAgeOut = 0
, AgentTime = @NOW
Where ItemKey = @RID and AgentSite = @SiteCode

DECLARE @VALUE3 nvarchar(max) = (@GroupToUpdate + '' has been updated in the DiscItemAgents Table with these values ''+ CAST(@NOW as varchar) + '', DueForAgeOut to 0.'')
RAISERROR (@VALUE3,1,1) with LOG
END
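
If you'd rather script the job than click through SSMS, the skeleton would be something like the below (standard msdb SQL Agent procedures; the job name is a placeholder of mine, and you'd paste the keep-alive SQL above into @command--which is exactly why it carries the doubled single quotes). Treat this as a sketch, and add an sp_add_jobschedule call for your twice-daily (or weekly) cadence:

USE msdb
EXEC dbo.sp_add_job @job_name = N'CM KeepAlive SC_All_Employees'
EXEC dbo.sp_add_jobstep @job_name = N'CM KeepAlive SC_All_Employees'
 ,@step_name = N'Refresh AgentTime for the group'
 ,@subsystem = N'TSQL'
 ,@database_name = N'CM_XYZ'  -- your CM_<sitecode> database; this is the 'say which database' part
 ,@command = N'<paste the keep-alive sql from above here>'
EXEC dbo.sp_add_jobserver @job_name = N'CM KeepAlive SC_All_Employees'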


5) Monitor the job's success by looking at your SQL logs. Using SQL Server Management Studio (SSMS), connect to the primary site server that houses your CM_ database, go to +Management, +SQL Server Logs, then double-click "Current". If the above is running successfully, you'll see entries similar to this (the group name and time will be different for your environment):

MyDomain\SC_All_Employees has been updated in the discItemAgents Table with these values Jan 25 2021 5:35PM, DueForAgeOut to Null
MyDomain\SC_All_Employees has been updated to Jan 25 2021 5:35PM in the DiscItemAgents_Local Table
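
You can also ask the SQL Agent itself for run history, instead of reading the error log (standard msdb procedure; use whatever you named your job):

-- Recent run history for the keep-alive job
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'CM KeepAlive SC_All_Employees'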


6) PARANOIA STEPS

Make yourself reminders to check these, and confirm the routine is keeping the group alive:

DECLARE @RID BIGINT = (Select Resourceid from v_r_userGroup ug where ug.Unique_Usergroup_Name0 like '%SC_All_Employees')
Select U.ResourceID, U.Name0, U.Creation_Date, U.Windows_NT_Domain0 from v_r_usergroup u where u.ResourceID=@RID
Select * from DiscItemAgents dia where dia.ItemKey=@RID
Select * from DiscItemAgents_Local dial where dial.ItemKey=@RID
Select count(fcm.collectionid) as 'Count of Collections where this group is a member'
from v_fullCollectionMembership fcm where fcm.ResourceID=@RID

How do you know a problem has happened?
If the group is just plain gone, and the 'Count of Collections where this group is a member' = 0.

That means something deleted that group--whether it was a human literally going into the console and doing a right-click, Delete on the group (oops!!!), or the Delete Aged Discovery Records task cleared it out. You then have to decide: do you still need that group, or was it retired on purpose? If it wasn't retired on purpose, most likely you'll have to re-take the DDR hit by re-discovering the group in Group Discovery, wait for your DDR backlog and/or replication backlog to clear, and then confirm this routine works.

7) What if the uservoice is addressed in a future version, and there is a way to NOT discover the members inside a security group?
- If so, create the Group Discovery rule for this group, and follow whatever the guidance is to say "just the group please, not the members inside the group".
- Disable this SQL Agent job--you'll never need to run it again if ECM Current Branch handles this natively. You could probably also just delete the SQL job completely, as sketched below.
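
Deleting it, if you go that route, is one line (again, substitute whatever you named your job; mine is a placeholder):

-- Remove the keep-alive job entirely once the product handles this natively
EXEC msdb.dbo.sp_delete_job @job_name = N'CM KeepAlive SC_All_Employees'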