Bypass PowerShell ExecutionPolicy

While attempting some PowerShell (WinRM) remote actions, specifically using Roger Zander's Collection Commander, I came across this blog entry and thought, "Awesome, already done for me!": http://www.verboon.info/2014/12/installing-software-using-collection-commander/

And then I kept getting errors during testing: "Exception calling "Install" : "".  But it would work fine in the home lab... After much head scratching: at work we have a GPO that sets the PowerShell ExecutionPolicy to RemoteSigned--which is good, of course--but that threw this particular script for a loop. In the home lab--since it is a home lab--I had set the execution policy to Unrestricted on the test box.

What I ended up doing was finding this blog post about different ways to get around a RemoteSigned execution policy (in a good way, not trying to do evil things): https://blog.netspi.com/15-ways-to-bypass-the-powershell-execution-policy/

The one that was easiest to implement for these specific needs was the "Bypassing in Script" technique detailed here:
http://www.nivot.org/blog/post/2012/02/10/Bypassing-Restricted-Execution-Policy-in-Code-or-in-Script
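
For reference, the gist of that in-script technique (a minimal sketch based on my reading of those two posts--the field names involved are internal and undocumented, and could change between PowerShell versions, so test it first) is to swap out the session's AuthorizationManager via reflection so the current process stops enforcing the machine ExecutionPolicy:

 # Sketch only: replace this session's AuthorizationManager so the ExecutionPolicy
 # is no longer enforced for scripts run in this process.
 function Disable-ExecutionPolicy {
     # Grab the private _context field from the current execution context...
     $ctx = $ExecutionContext.GetType().GetField('_context','NonPublic,Instance').GetValue($ExecutionContext)
     # ...and replace its _authorizationManager with a fresh, permissive one.
     $authMgr = New-Object System.Management.Automation.AuthorizationManager 'Microsoft.PowerShell'
     $ctx.GetType().GetField('_authorizationManager','NonPublic,Instance').SetValue($ctx, $authMgr)
 }

 Disable-ExecutionPolicy   # call once at the top of the script, before anything else runs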

CM All Members of All Local Groups - PowerShell

"Back in the Day", --> Here<-- a vbscript was created to allow for ConfigMgr (version 2012 at the time was the version I was using) to be able to custom inventory the members of Local Groups. This was mostly in response to manager-type requests to know "what individuals or groups are inside the local Administrators group". That has been working fine for years... but times change, and it may be more palatable to use Powershell.  Powershell is more widely used and understood instead of vbscript. Additionally, if your company requires it, you can sign powershell scripts so you know they are tested and trusted internally.

**WARNING** As of August 2021, this has been tested in ONE and only ONE small lab environment, with only 2 clients. If/when I hear from people that this works "fine", I'll update this blog. Until that time, I strongly suggest you test this thoroughly in your lab environment. Don't just blindly trust this. It is definitely a work in progress, and may have so many flaws that you'll break something, horribly. Test. Test. Test.  I also suggest you read the scripts; I did try to over-explain and add comments everywhere, but as with anything you might randomly find on the internet, read it through first.  Know what it is and what it is trying to do, and/or test it interactively on a standalone lab box.  "Trust, but verify".

--> Attached<-- is a zip file containing 2 scripts (renamed from .ps1, in case your anti-malware flags and blocks script files), a .mof file to be imported, and a basic SQL query to get you started.

How to use the attached... If you are familiar with CM Configuration Items, and the concept of "script + mof edit", the below is over-explained. If you are already familiar with the concepts, just download the attachments and set it up in your lab for testing; and once you are comfortable, deploy it as you like.

  1. In your CM console, go to Assets and Compliance, then Compliance Settings, then Configuration Items.
  2. Create a Configuration Item. When prompted, give it a name (the name is up to you and your standards; for the purposes of this walk-through, I'm calling the Configuration Item "Inventory Staging for Local Group Members with Logging").
  3. This is a "Windows Desktop and Servers" type; you *do* want to check the box for "This configuration item contains application settings".
    1. Add a description if you like; perhaps the link to this blog, or the date you added this, and what manager-type wanted this information; whatever might be useful 2 years down the road when the person that comes after you is trying to figure out what this is for and why.
  4. Next.
  5. Detection Methods: select "Use a custom script to detect this application". That script is the one in the attachment labeled "ApplicabilityForTheCI.Rename-to-ps1". It checks whether the client is a Domain Controller; if it *is* a Domain Controller, the Configuration Item is NOT APPLICABLE, and the main script won't run. The main script also does its own check and bails, and hopefully you will also deliberately never target your domain controller(s) with this... but mistakes happen. The more places that ensure a DC won't be asked these types of questions, the better you'll feel about having this in your environment. You certainly don't want your DCs trying to do this.
  6. Next
  7. Settings, New... Give it a Name (any name), and a description.
    1. Setting Type = Script
    2. Data Type = String
    3. Add Script... Script Language = PowerShell, then copy and paste in the script contained in the attachment labeled "MainScript.Rename-To-ps1" (a rough sketch of what that script does appears after this list)
      • Optionally... Sign the script according to your company standards
      • Optionally... Change the logging location from %temp% to the CM client log folder (it's within the script; just comment/uncomment the correct lines)
      • Optionally... Turn off a local log file completely, according to your company standards.
  8. Within the Settings area, at the top, change from "General" to "Compliance Rules".
    1. New...Rule Type = Existential, and you want the default choice of "The specified script returns at least one value".
  9. Ok. Ok. Hit Next/Next/Next however many times until it's done and saved.
  10. In your CM Console, go to Assets and Compliance, then Compliance Settings, Configuration Baselines
  11. Create Configuration Baseline, give it a name and description; again--according to your own standards, and try to leave a good description for the person coming after you to know what this is for and why.
  12. Add, Configuration Items, and find the one you created above. Assuming you called it exactly what I called it, it'll be called "Inventory Staging for Local Group Members with Logging". Click Add, then OK.
  13. Don't hit the next OK yet. Select that name in the middle, and you want to "Change Purpose" from Required to Optional. NOW hit OK.
  14. If you don't yet have a collection of Test devices, go make a collection of test workstations and/or Server clients. Once you have a collection of devices (ideally, ones to which you have rights to look at their %temp% or cm logs remotely), Deploy this baseline to that collection. Frequency to run is up to you, but I would suggest no more frequently than every 3 days--honestly, this inventory staging isn't that important. Every 7 days is most likely fine.

  15. TEST TEST TEST

    On those test devices, trigger policy refreshes, and when the baseline appears, have it run. Depending upon which log location you set, check that location for the log file. Additionally, you can use your favorite WMI browser (WMIExplorer?) to check root\cimv2\cm_localgroupmembers and see whether what will be reported matches reality.

  16. Once you have confirmed it does what you want it to do, you will want to set up ConfigMgr to inventory this custom WMI class. NOTE!!! If you have previously used the VBScript Configuration Item: this is the exact same WMI class name, so you may first have to delete the existing "CM_LocalGroupMembers" class. Every environment is different, so I can't predict what you may or may not need to do in your environment for this customization. In general, in your CM Console, go to Administration, Client Settings, right-click "Default Client Settings", Properties, then Hardware Inventory. From the attachment, have the "ToBeImported.mof" available. Set Classes... then Import the .mof.
  17. MONITOR your <server, CM installed location>\Logs\dataldr.log and confirm the mof is imported successfully, and the view is created.
  18. On those test clients (remember, you have NOT deployed this yet to most devices), wait a bit, then policy refresh. Then do a Hardware Inventory action. Monitor the client's InventoryAgent.log, and hopefully you'll see the WMI query for select...from cm_localgroupmembers. Wait a bit for your server to process that inventory, then use SQL (or, I suppose, Resource Explorer) to check that box's inventory--confirm the values were reported.
  19. Once you've confirmed that all the sections work--from the CI/Baseline to inventory--you can deploy that Baseline to the devices you want to report on; that's up to you, of course. All workstations? All workstations and all servers (but NOT Domain Controllers)? Just that <insert annoying internal team that always tries to bypass the rules and puts a random local user in the local Administrators group, because "they need it" (even when every company policy says to never do that, so they need to get yelled at by upper management, and you have to tell upper management who they need to yell at, using this routine)>?
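
For orientation only (the attached "MainScript.Rename-To-ps1" is the authoritative version), the general shape of what the main script does is sketched below: bail out on Domain Controllers, walk the local groups and their members, stage the results into a custom root\cimv2 WMI class, and echo something back so the existential Compliance Rule has a value to find. The class and property names below line up with the SQL views further down, but the DomainRole check and the exact property list are my own illustration--treat this strictly as a sketch, not a copy of the attachment.

 # Illustrative sketch only; read and use the attached MainScript.Rename-To-ps1 instead.
 # Do nothing on Domain Controllers (Win32_ComputerSystem DomainRole 4 or 5 = DC).
 if ((Get-CimInstance Win32_ComputerSystem).DomainRole -ge 4) { return }

 # (Re)create the custom class that hardware inventory will later read.
 $ns = 'root\cimv2'; $className = 'CM_LocalGroupMembers'
 if (Get-CimClass -Namespace $ns -ClassName $className -ErrorAction SilentlyContinue) {
     ([wmiclass]"${ns}:${className}").Delete()
 }
 $class = New-Object System.Management.ManagementClass -ArgumentList @($ns, [string]::Empty, $null)
 $class['__CLASS'] = $className
 foreach ($prop in 'Name','Account','Domain','Category','Type','Enabled') {
     $class.Properties.Add($prop, [System.Management.CimType]::String, $false)
 }
 $class.Properties['Name'].Qualifiers.Add('Key', $true)
 $class.Properties['Account'].Qualifiers.Add('Key', $true)
 [void]$class.Put()
 $wmiClass = [wmiclass]"${ns}:${className}"

 # Walk every local group and record its members.
 foreach ($group in Get-LocalGroup) {
     foreach ($member in (Get-LocalGroupMember -Group $group.Name -ErrorAction SilentlyContinue)) {
         $domain, $account = $member.Name -split '\\', 2
         $enabled = ''
         if ($member.ObjectClass -eq 'User' -and $member.PrincipalSource -eq 'Local') {
             $enabled = [string](Get-LocalUser -Name $account -ErrorAction SilentlyContinue).Enabled
         }
         $inst = $wmiClass.CreateInstance()
         $inst.Name     = $group.Name            # name of the local group on this device
         $inst.Account  = $account               # member account or group name
         $inst.Domain   = $domain                # domain, or the local computer name
         $inst.Category = $member.ObjectClass    # User / Group
         $inst.Type     = [string]$member.PrincipalSource
         $inst.Enabled  = $enabled
         [void]$inst.Put()
         "$($group.Name)\$account"               # echoed output satisfies the existential Compliance Rule
     }
 }

On a test box you can then eyeball the staged data with Get-CimInstance -Namespace root\cimv2 -ClassName CM_LocalGroupMembers, before hardware inventory ever runs.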

Sample SQL to get you started... This would be "show me the users and groups which are in the local Administrators group, excluding 'Domain Admins'":
select
s1.netbios_name0 as 'ComputerName'
,lgm.Account0 as 'Account or GroupName'
,lgm.Category0 as 'Category'
,lgm.Domain0 as 'Domain or Local ComputerName, if Associated with this Account'
,lgm.Enabled0 as 'if local account, is it enabled'
,lgm.name0 as 'Name of the Local Group on this device'
,lgm.Type0 as 'Type of account according to Get-LocalGroupMember, PrincipalSource'
from v_GS_LocalGroupMembers0 lgm
join v_r_system s1 on s1.resourceid=lgm.ResourceID
Where lgm.Name0 = 'Administrators'
and lgm.account0 not in ('Domain Admins')

Or... "within the local Administrators group, show me groups which aren't Domain Admins, and anything else which is enabled or null"
select
s1.netbios_name0 as 'ComputerName'
,lgm.Account0 as 'Account or GroupName'
,lgm.Category0 as 'Category'
,lgm.Domain0 as 'Domain or Local ComputerName, if Associated with this Account'
,lgm.Enabled0 as 'if local account, is it enabled'
,lgm.name0 as 'Name of the Local Group on this device'
,lgm.Type0 as 'Type of account according to Get-LocalGroupMember, PrincipalSource'
from v_GS_LocalGroupMembers0 lgm
join v_r_system s1 on s1.resourceid=lgm.ResourceID
Where lgm.Name0 = 'Administrators'
and lgm.account0 not in ('Domain Admins')
and (lgm.Enabled0 = 'true' or lgm.Enabled0 is null)

Configuration Manager Versions Summary Report

I'm sure there are a dozen if not hundreds of blog posts out there with this exact same information; I'm posting it mostly for myself.  If it helps someone else, great.  As of late July 2016, these are the versions (and their marketing or public names) for the ConfigMgr client.  It doesn't go back to SMS 1.0, or even ConfigMgr 2007--so it's not as useful for "everything ever".

But if you get "unknown" versions when you run it, you can fill in your own blanks.

SELECT COUNT(resourceID) [Count],
    Client_Version0 [Version]
, case
when client_version0 = '5.00.7711.0000' then 'ConfigMgr 2012 RTM'
when client_version0 = '5.00.7804.1000' then 'ConfigMgr 2012 SP1'
when client_version0 = '5.00.7804.1202' then 'ConfigMgr 2012 SP1 CU1'
when client_version0 = '5.00.7804.1300' then 'ConfigMgr 2012 SP1 CU2'
when client_version0 = '5.00.7804.1400' then 'ConfigMgr 2012 SP1 CU3'
when client_version0 = '5.00.7804.1500' then 'ConfigMgr 2012 SP1 CU4'
when client_version0 = '5.00.7804.1600' then 'ConfigMgr 2012 SP1 CU5'
when client_version0 = '5.00.7958.1000' then 'ConfigMgr 2012 R2'
when client_version0 = '5.00.7958.1060' then 'ConfigMgr 2012 R2 for Linux'
when client_version0 = '5.00.7958.1203' then 'ConfigMgr 2012 R2 CU1'
when client_version0 = '5.00.7958.1254' then 'ConfigMgr 2012 R2 CU1 for Linux'
when client_version0 = '5.00.7958.1303' then 'ConfigMgr 2012 R2 CU2'
when client_version0 = '5.00.7958.1401' then 'ConfigMgr 2012 R2 CU3'
when client_version0 = '5.00.7958.1501' then 'ConfigMgr 2012 R2 CU4'
when client_version0 = '5.00.7958.1604' then 'ConfigMgr 2012 R2 CU5'
when client_version0 = '5.00.8239.1000' then 'ConfigMgr 2012 R2 SP1'
when client_version0 = '5.00.8239.1203' then 'ConfigMgr 2012 R2 SP1 CU1'
when client_version0 = '5.00.8239.1301' then 'ConfigMgr 2012 R2 SP1 CU2'
when client_version0 = '5.00.8239.1403' then 'ConfigMgr 2012 R2 SP1 CU3'
when client_version0 = '5.00.8325.1000' then 'ConfigMgr 1511'
when client_version0 = '5.00.8355.1000' then 'ConfigMgr 1602'
when client_version0 = '5.00.8355.1001' then 'ConfigMgr 1602 with policyagentendpoint.dll update'
when client_version0 = '5.00.8355.1306' then 'ConfigMgr 1602 with KB3155482'
when client_version0 = '5.00.8355.1307' then 'ConfigMgr 1602 with KB3174008'
when client_version0 = '5.00.8412.1000' then 'ConfigMgr 1606 TAP'
when client_version0 = '5.00.8412.1006' then 'ConfigMgr 1606'
when client_version0 = '5.00.8412.1007' then 'ConfigMgr 1606 with KB3180992'
else 'unknown' end as [Marketing Version]
  FROM dbo.v_R_System_Valid
  GROUP BY
    Client_Version0
  ORDER BY [Version] desc

There's also this way, if you don't want to deal with all those pesky cumulative updates or hotfixes:

;with cte as (select resourceid, substring(client_version0,6,4) as [Ver] from v_r_system_valid)

select Ver,
Case
when Ver = '7711' then 'ConfigMgr 2012 RTM'
when Ver = '7804' then 'ConfigMgr 2012 SP1'
when Ver = '7958' then 'ConfigMgr 2012 R2'
when Ver = '8239' then 'ConfigMgr 2012 R2 SP1'
when Ver = '8325' then 'ConfigMgr 1511'
when Ver = '8355' then 'ConfigMgr 1602'
when Ver = '8412' then 'ConfigMgr 1606'
else 'unknown'
 end as [Version]
,Count(resourceid) [Count]
from cte
group by ver
order by ver

Politely Schedule restarts of CCMExec Service

Over the years of troubleshooting the SCCM client, even with the built-in CCMEval task attempting to watch and remediate client health, experience has shown those of us in the trenches that sometimes, despite everything else, simply restarting the SMS Agent Host (aka the ccmexec service) will clear previously inexplicable issues.  A service restart is often less disruptive to the end user than asking "have you tried a reboot yet?"

If that scenario is something you've encountered in your environment, or you just want to be proactive (like some other companies), one way to accomplish an 'SMS Agent Host' restart is to ask the CCMEval task to do it for you.  Kent Agerlund very kindly shared the edits they've made for their customers, and across those customers it was determined that, overall, issues with the SCCM client were reduced.

I've taken his edits, and created a couple of Configuration Items.  It's the ccmeval.xml which indicates what tests should be run by the ccmeval scheduled task.  Two tasks are added to ccmeval.xml:
- Restart CCMExec.exe-Stop
- Restart CCMExec.exe-Start

There are two --> attached <-- Configuration Items.  One modifies ccmeval.xml to add the stop/start actions.  The other returns ccmeval.xml to the original values (as of Current Branch 1806 clients; the ccmeval.xml hasn't changed in years, so it is anticipated it won't change in future versions... but nothing is certain).
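
As a rough illustration of the "detect" half of what such a Configuration Item has to decide (the attached CIs are the authoritative versions, and they also handle the remediation side), a hypothetical discovery-style check might look like this:

 # Hypothetical sketch; the attached Configuration Items do the real work, including remediation.
 # Report Compliant only if ccmeval.xml already contains the two extra restart actions.
 $evalXml = Join-Path $env:windir 'CCM\ccmeval.xml'
 if (Test-Path $evalXml) {
     $content = Get-Content -Path $evalXml -Raw
     if ($content -match 'Restart CCMExec\.exe-Stop' -and $content -match 'Restart CCMExec\.exe-Start') {
         'Compliant'
     }
     else {
         'NonCompliant'
     }
 }

The "Return to Original" CI is the mirror image: compliant only when those entries are absent, with remediation putting the stock ccmeval.xml content back.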

What you would do to test:

  • Create a Baseline, let's call it "CCMEval Add Action to Restart CCMEXEC".  Add ONLY the 1 Configuration Item, 'ccmeval.xml Add Service Restart', make it optional (not required).
  • Deploy that baseline to a collection of TEST computers; to run daily, make sure you check the box for Remediation (not just monitor).
  • On the client
    • after the baseline of "CCMEval Add Action to Restart CCMEXEC" has run, go look at ccmeval.xml (it's usually in %windir%\ccm folder); and you should see the new actions have been added.
    • if you are patient--wait overnight.  The next day, check %windir%\ccm for ccmevalreport.xml.  Open that file and look for the actions "Restart CCMExec.exe-Stop." and "Restart CCMExec.exe-Start."; they should have result codes of 0 (success).  You might also want to take note of the time ccmevalreport.xml was created, then go look in %windir%\ccm\logs--for example, ccmexec.log or clientidmanagerstartup.log--for entries around that time; you should notice that the logs indicate a service restart.
    • if you are NOT patient... from cmd-prompt-as-admin, you can run ccmeval.exe from %windir%\ccm, and then look at the files and results as indicated above.
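
If you go the impatient route, something like this from an elevated PowerShell prompt kicks off an evaluation and then checks the report for the new actions (this assumes the default client install path under %windir%\ccm; give ccmeval a minute or two to finish before the report is current):

 # Run from an elevated PowerShell prompt; adjust the path if your client install differs.
 & "$env:windir\CCM\ccmeval.exe"
 Select-String -Path "$env:windir\CCM\ccmevalreport.xml" -Pattern 'Restart CCMExec\.exe'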

PARANOIA TESTING

  • Remove the Deployment of the baseline "CCMEval Add Action to Restart CCMEXEC" to your test collection.
  • Create a Baseline called... "CCMEval Return to original", and add just and only 'ccmeval.xml Return to Original', make it optional (not required).
  • Deploy the baseline to your collection of Test Computers, to run daily, make sure you check the box for Remediation (not just monitor)
  • Confirm the ccmeval.xml gets set back to no longer having the 2 additional tasks
  • Manually run ccmeval.exe after the xml is changed back, and/or wait overnight, to confirm that ccmeval runs, and no longer restarts the ccmexec service.
  • Remove the Deployment of the Baseline "CCMEval Return to Original" (hopefully you'll never need this again... but...)
  • Once you've satisfied yourself that you can not only modify the ccmeval.xml, but also return it to a pre-changed condition, then you will (hopefully) be confident to move forward.

Your next step (if you choose to go forward) is to deploy the "CCMEval Add Action to Restart CCMEXEC" to a collection of targets.

One thought... I personally would not deploy the xml change to any Server OS, and definitely not to any of my SCCM servers--because the Management Point processes use ccmexec.  Restarting ccmexec on a Management Point role server might be fine... only you can say what makes sense in your infrastructure.  If you restart SMS Agent Host on your Management Point role servers outside of a reboot, what does that impact for you?  Anything?  If no impact, then sure.  But YOU need to test, test, test.

You may be asking yourself why this blog article is titled 'politely'... that's because the ccmeval scheduled task is designed to run only when the client isn't doing other important things and the system is quiet.  By design, ccmeval tries to be quiet and discreet about when it runs, and its start time is randomized.

Visual Studio 2017 Editions using ConfigMgr Configuration item

This is a companion to https://mnscug.org/blogs/sherry-kissinger/416-visual-studio-editions-via-configmgr-mof-edit  It *might* be a replacement for the previous mof edit, but I haven't tested this enough to make that conclusion--test it yourself and see.

Issue to be resolved:  there are licensing groups at my company who are tasked with ensuring licensing compliance.  There is a significant cost difference between Visual Studio Standard, Professional, and Enterprise.  Prior to Visual Studio 2017, that information could be obtained via registry keys, and a configuration.mof edit + import (see the link above) was sufficient to obtain it.

According to https://blogs.msdn.microsoft.com/dmx/2017/06/13/how-to-get-visual-studio-2017-version-number-and-edition/ (published June 2017, by the looks of it), that information is no longer in the registry.  There is a UserVoice item --> https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/19026784-please-add-a-documentation-about-how-to-detect-in <-- requesting that the Visual Studio devs put it back--but there's no acknowledgement that it will ever happen.

So that means we lonely SCCM administrators, tasked with "somehow" getting the edition information to the licensing teams at our companies, have to--yet again--find a way to "make it happen" using the tools provided.  So here's "one possible way".

This has only been tested on ONE device in a lab... so it's probably not perfect.  Supposedly, using the -legacy switch it'll also detect "old versions" installed--but I have no idea if that works or not.  Might not.

Here's how I plan on deploying this...

1)  configuration Item, Application Type.
    a) "Detection Method": use a PowerShell script. This may not be universal, but currently in my lab this location of 'vswhere.exe' is consistently the same place.  Here's hoping it won't change.  So the detection logic for the CI to bother to run at all is "do you have vswhere.exe where I think it should be":

 # Detection: applicable only if vswhere.exe exists in the Visual Studio Installer folder.
 $ErrorActionPreference = 'SilentlyContinue'
 $location = ${env:ProgramFiles(x86)} + '\Microsoft Visual Studio\Installer\vswhere.exe'
 if ([System.IO.File]::Exists($location)) {
     # Returning any output marks the CI as detected/applicable.
     Write-Host $location
 }

    b) Setting, Discovery Script: see the --> attached <-- .ps1 file.  The Compliance Rule is just existential--any result at all.  (A rough sketch of the discovery script's core idea appears after this list.)
2)  Deploy that CI in a Baseline, as 'optional'.  Whether I just send it to every box everywhere, or create a collection of machines with Visual Studio 2017 in installed software--either way should work.
3)  Once deployed, and a box with Visual Studio 2017 has run it, confirm that the sample box DOES create a root\cimv2 cm_vswhere class, and that there is data inside.
4)  Enable inventory
    a) In my SCCM Console, Administration, Client Settings, right-click Default Client Settings, properties
    b) Hardware Inventory, Set Classes...
    c) Add...
    d) Connect... to "the computer you checked in step 3 above, where you confirmed there is data locally on that box in root\cimv2, cm_vswhere", and to the root\cimv2 namespace
    e) find the class "cm_vswhere"  check the box, OK. OK. OK.
5) monitor
    a) on your primary site, <installed location for SCCM>\Logs, dataldr.log 
    b) It'll chat about pending adds in the log.  Once that's done, you'll see a note about how it made some views for you.  "Creating view for..."
6) Wait a day, and then look to see whether there is any information in a view probably called something like v_gs_cm_vswhere.  But your view might have a different name--you'll just have to look.
    a) If you're impatient, go back to that box from step 3 above, do some policy refreshes, then a hardware inventory.
7) End result: you should get information in the field "displayName0", like "Visual Studio Professional 2017", and you'll be able to make custom reports using that information.  Which should hopefully satisfy your licensing folks.
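
For the curious, the heart of the attached discovery script is simply running vswhere.exe and reading what it reports; a minimal sketch of that idea (the attached .ps1 is the authoritative version--it also stages the results into the cm_vswhere class and is sprinkled with write-verbose) might look like:

 # Minimal sketch only; use the attached .ps1 for the real thing.
 # Ask vswhere.exe for installed Visual Studio instances as JSON and list their editions.
 $vswhere = ${env:ProgramFiles(x86)} + '\Microsoft Visual Studio\Installer\vswhere.exe'
 if ([System.IO.File]::Exists($vswhere)) {
     # -legacy also reports pre-2017 installs (those entries expose fewer properties)
     $instances = (& $vswhere -all -legacy -format json) -join "`n" | ConvertFrom-Json
     foreach ($vs in $instances) {
         # e.g. "Visual Studio Professional 2017"
         Write-Output $vs.displayName
     }
 }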

To reiterate... tested on ONE box in a lab.  Your mileage may vary.  Additional tweaks or customizations may be needed in the script.  That's why, in the script, I tried to add a bunch of 'write-verbose' statements.  If you need to figure out why something isn't working right, change the VerbosePreference to Continue (not SilentlyContinue) and run it interactively on a machine--to hopefully figure out and address any unanticipated flaws.

Windows 10 Inplace Update History Inventory

We were tasked at our company with getting some statistics around machines which went through in-place upgrades vs. machines which are on an 'original image' (or bare-metal image, or whatever phrase you would like to give that).

--> Gary Blok <-- assisted here: he suggested using the subkeys in the registry under 'HKLM:\System\Setup' which start with "Source OS...".  Of course, the problem is that each and every computer which goes through an upgrade ends up with a different key name.  That meant that in order to inventory that information, it would need to be a script.
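
A minimal sketch of that registry enumeration (the attached 'SourceOS-IntoWMI' script is the authoritative version; it also stages the rows into the custom root\cimv2\cm_SourceOS class mentioned below, and the value names shown here are just the ones surfaced in the SQL samples) might look like:

 # Sketch only; the attached script is the real thing and also writes the results into root\cimv2\cm_SourceOS.
 # Each in-place upgrade leaves behind an 'HKLM:\SYSTEM\Setup\Source OS (Updated on ...)' subkey.
 $sourceKeys = Get-ChildItem 'HKLM:\SYSTEM\Setup' | Where-Object { $_.PSChildName -like 'Source OS*' }
 foreach ($key in $sourceKeys) {
     $props = Get-ItemProperty -Path $key.PSPath
     [pscustomobject]@{
         CurrentBuild = $props.CurrentBuild
         EditionID    = $props.EditionID
         InstallDate  = $props.InstallDate    # stored in the registry as an epoch-style number
         PathName     = $props.PathName
         ReleaseID    = $props.ReleaseId
         UBR          = $props.UBR
     }
 }

(The LatestOS0 column you'll see in the views below presumably flags the row describing the currently running OS, which the attached script gathers separately from the upgrade leftovers.)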

Sample query; for example... what does it look like to see the 'history' of machines which have gone through an upgrade (and reported back the info from this script + mof edit)?

select s1.netbios_name0 as 'ComputerName',
so.CurrentBuild0 as 'CurrentBuild'
,so.EditionID0 as 'EditionID'
,so.InstallDate0 as 'InstallDate'
,so.LatestOS0 as 'LatestOS'
,so.PathName0 as 'PathName'
,so.ReleaseID0 as 'ReleaseID'
,so.UBR0 as 'UBR'
from v_GS_SourceOS0 so
join v_r_system s1 on s1.resourceid=so.resourceid
Where so.resourceid in (
       select so.ResourceID
       from v_GS_SourceOS0 so
       Group by so.resourceid
       HAVING count(so.resourceid) > 1
       )
order by s1.netbios_name0, so.LatestOS0 desc, so.InstallDate0 desc

Another sample; for example... if someone were to ask you, "despite machines going through several upgrades, when was this machine originally imaged with a base image?":

select s1.netbios_name0 as 'ComputerName', Min(InstallDate0) as 'Install Date'
from v_GS_SourceOS0 so
join v_r_system s1 on s1.resourceid=so.resourceid
Where s1.netbios_name0 = 'Computer1'
group by s1.netbios_name0


Attached --> Here<-- is a .zip file, containing three things.
- a .cab file if you just want to import the Configuration Item into your SCCM console
- 'SourceOS-IntoWMI.RenametoPS1.txt', the powershell script which is inside that CI.  Sometimes for some unknown reason, people are unable to import a .cab file into their environment.  You could create a new CI in your environment, using that .ps1 information, and the "what means compliant" would simply be existential, that any value is returned at all.
- 'SourceOS.mof' (for inventory import)

If you think this might be interesting to implement in your environment; here's the steps.

1) Either import the .cab file into your Configuration Items in your CM console
2) If importing the .cab doesn't work, instead create a new Configuration Item, call it whatever you like, and the Setting will be a script type, string.  Paste in the contents of the 'RenametoPS1...' file.  The CI "test for compliance" will be Existential--that any value is returned at all.
3) Add that CI to a Baseline of your choosing, and deploy the baseline to the machines you want to report on this information.  Perhaps it's only your Windows 10 devices.  I'd suggest having the baseline re-run 'every 7 days'--it really doesn't need to be more frequently, unless you have a lot of people hovering over your shoulder needing this info 'yesterday, and then every day forever'.  Frequency is up to you, but certainly not more frequently than daily.  More frequently than daily for this is overkill.
4) In your Console, Administration, Client Settings, Default Client Settings, right-click properties, Hardware Inventory, import the "SourceOS.mof".  Monitor your server's dataldr.log to confirm all is well.
5)  Wait.  Wait some more.  Wait a bit longer than that--up to a week.  <grin>  What you're waiting for is:

a) the clients which received the baseline to run it, and locally populate the completely-custom WMI class root\cimv2\cm_SourceOS.

b) Then you're waiting for machines which have done that to submit hardware inventory with that new, custom information.  Depending upon your environment, this could take hours... or days, or over a week.  There is no one-size-fits-all answer for how long you need to wait for most targets to report this information.

c) After you've waited a bit, try out one or both of the sql queries above, to see if you have information.

Notes: in the 'SourceOS.mof', I only set some of the values to TRUE for reporting.  If you think some of the other values that could be reported would be interesting to have, simply enable them in your console, so the clients start reporting those additional values.

Copyright © 2021 - The Twin Cities Systems Management User Group