Need to enable IPv6 and apply static addresses?

Are you ready for IPv6 and need to enable it and apply IPv6 static addresses to a list of devices?  We were recently tasked to do just that on our servers, since our infrastructure and applications are now ready for IPv6 and we can take full advantage of all the benefits it brings.   But the challenge is, we have close to a hundred servers in our department alone, and I was not about to log in to each server and get carpal tunnel from clicking and typing around...  So, I wrote a simple script to help us, and hopefully it will help others too :).

The script is fairly simple...  It remotely enables IPv6 (if it isn't enabled already) and applies the IPv6 static addresses you assign, based on the list you provide in CSV format.   If the target device has multiple NICs, it only assigns the static address to the NIC name you provide in the param section (see $DedicatedNIC).  It also detects whether you have NIC teaming enabled and, once again, only applies the static address to the teamed NIC name you specify in the param section (see $DedicatedTeamNIC).  Additionally, it creates a log in the same folder as the script so you can monitor progress.

.USAGE
    Set-Static-IPv6.ps1 -CsvFile <path to the csv file> -PrefixLength <PrefixLength> -PreferredDNS <Primary IPV6DNS Address> -AlternateDNS <Alternate IPV6DNS Address>

    CSV file format:
        Name,IPAddv6,GateWay
        <Hostname1>,<IPV6Address>,<IPV6Gateway>
        <Hostname2>,<IPV6Address>,<IPV6Gateway>
 
NOTE:  The prefix length, the DNS addresses, and the NIC(s) you're targeting are preset in the param section.  You can change them using the switches, or they can be moved into the CSV if need be, especially the DNS addresses...  You may have multiple, but in our case we only have two, so presetting them in the param section made sense.
Oh, and use it at your own risk! :) :) :)
 
param(
    [string]$CsvFile,
    [string]$PrefixLength = "64",
    [string]$PreferredDNS = "xxxx:xxx:xxxx:xxxx::1111",
    [string]$AlternateDNS = "xxxx:xxx:xxxx:xxxx::1112",
    [string]$DedicatedTeamNIC = "Team",
    [string]$DedicatedNIC = "Ethernet"
)
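For reference, here's a minimal sketch of the kind of per-host work the script performs, assuming the CSV layout above and the standard NetAdapter/NetTCPIP/DnsClient cmdlets on the targets.  The real script adds the NIC-teaming detection, logging, and error handling described earlier; only the names from the param section and CSV are real, everything else is illustrative.

# Minimal sketch only -- the downloadable script handles NIC teaming, logging, and errors.
Import-Csv $CsvFile | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -ArgumentList $_.IPAddv6, $_.GateWay, $PrefixLength, $PreferredDNS, $AlternateDNS, $DedicatedNIC -ScriptBlock {
        param($Address, $Gateway, $Prefix, $Dns1, $Dns2, $Nic)

        # Enable the IPv6 binding on the dedicated NIC (no-op if it's already enabled)
        Enable-NetAdapterBinding -Name $Nic -ComponentID ms_tcpip6

        # Apply the static IPv6 address and default gateway from the CSV
        New-NetIPAddress -InterfaceAlias $Nic -AddressFamily IPv6 -IPAddress $Address -PrefixLength $Prefix -DefaultGateway $Gateway

        # Point the NIC at the IPv6 DNS servers
        Set-DnsClientServerAddress -InterfaceAlias $Nic -ServerAddresses $Dns1, $Dns2
    }
}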
 

---> Click HERE to download the script. <---

Congrats on being IPv6 ready!  :) :) :)


Update_ComplianceStatus report inconsistencies on your CAS?

Do you have a CM hierarchy (CAS and Primary site(s)) deployed, and inconsistencies with update compliance status reporting?   Normally, triggering a "RefreshServerComplianceState" on the troubled endpoints does the trick. (By the way, click here if you'd like to see my colleague Sherry Kissinger's awesome blog about automating this in your environment by making it a CI/Baseline, which has worked great for us.) That "refresh" sends the actual patch compliance state of the troubled clients up the chain and fixes the issue.   But what if that doesn't work, or completely resetting the machine policy doesn't cut it, and your patch reporting still has discrepancies?   That's exactly what we ran into in our environment just recently. How does this happen? When dealing with a hierarchy in a large environment, you're bound to have network hiccups, storage issues, outages, DRS issues, data corruption, etc., all of which can result in data and/or state inconsistencies. Nonetheless, here's how we addressed the issue.

Determining the issue:

When querying the CAS database for the "required" patches that need to be applied on the client in question (Status 2 means Required/Missing), using the SQL query below...

DECLARE @Resourceid int = (select resourceid from v_r_system_valid s1 where s1.netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>')
If @ResourceID IS NOT NULL
BEGIN
-- get update info
DECLARE @Updates TABLE (CI_ID INT, BulletinID VARCHAR(64), ArticleID VARCHAR(64), Title VARCHAR(512), Description VARCHAR(3000), InfoURL VARCHAR(512), Severity INT,
  IsSuperseded INT, IsExpired INT, DateLastModified DATETIME);
INSERT INTO @Updates
SELECT  CI_ID, BulletinID, ArticleID, Title, Description, InfoURL, Severity,
  IsSuperseded, IsExpired, DateLastModified
FROM
  dbo.v_UpdateInfo ui
  Where Severity is not null
SELECT  upd.BulletinID, upd.ArticleID, ucs.Status, ucs.LastStatusChangeTime, ucs.LastStatusCheckTime,
  CASE WHEN ucs.Status IN(1,3) THEN 'GreenCheck' ELSE 'RedX' END AS StatusImage,
  case
  when upd.Severity = 0 then 'None Declared'
  when upd.Severity=2 then 'Low'
  when upd.Severity=6 then 'Moderate'
  when upd.Severity=8 then 'Important'
  when upd.Severity=10 then 'Critical'
  else cast(upd.Severity as nvarchar) end as 'Severity', upd.IsSuperseded, upd.Title,
  upd.Description,
  upd.InfoURL,
  upd.DateLastModified [Last Modified Date by Microsoft]
FROM
  @Updates upd
  JOIN dbo.v_Update_ComplianceStatusAll ucs
    ON upd.CI_ID = ucs.CI_ID
       AND ucs.ResourceID = @ResourceID
       AND ucs.Status =2  --(only required)
       AND upd.IsExpired = 0
ORDER BY
  upd.severity desc, upd.IsSuperseded, upd.DateLastModified desc, ArticleID desc
END

We would get over 60 various patches, including both Office 2013 and Office 2016 patches.  Um...  Hmm, the workstation doesn't even have Office 2013 installed!  Hmmm...

But then, when checking the client itself for missing patches, using the POSH cmd line below:

get-wmiobject -computername <HOSTNAME HERE> -query "SELECT * FROM CCM_UpdateStatus" -namespace "root\ccm\SoftwareUpdates\UpdatesStore" | Where-object { $_.Status -eq "Missing" } | select-object Title, Status

We'd only get about 19 patches that are required/missing.

How do we fix this?

As mentioned earlier, triggering the RefreshServerComplianceState at the device level normally fixes the issue:

invoke-command -ComputerName "<HOSTNAME HERE>" -Scriptblock {$SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore; $SCCMUpdatesStore.RefreshServerComplianceState()}

But this didn't appear to work in our case... We were still seeing inconsistencies even after executing this option. After looking at the clients extensively, it turned out the clients were clean, healthy, scanning properly, and reporting inventory up the chain without any issues. We then checked the Primary sites' databases, using the same SQL query above, for the clients' "Required" updates, and the results matched exactly what the troubled clients reported for missing and required patches.  It appears the CAS holds more stale patch article state than the Primary site databases do, and there are definitely discrepancies between the two.

So how do we fix this?   We opened a case with Microsoft support, and they recommended running the following against the CAS db in SSMS, which was an eye-opener for me... I had no idea we could initialize a synchronization for JUST a single article and not the entire replication group! NOTE: This may sound scary, but it's really not.  I really thought it would trigger a replication for the entire General_Site_Data group, but no...  just for the specific article we're having issues with.  And in fact, this processed REALLY fast for us.   Nonetheless, below is the magic wand :)  (Drum roll...)

EXEC spDrsSendSubscriptionInvalid '<CAS site Code>', '<Primary_Site_Code>', 'General_Site_Data', 'Update_ComplianceStatus'

This triggers a replication request from the CAS only for the "Update_ComplianceStatus" article from the target Primary site server.  Under the CAS’s rcmctrl.log, here's what you would start to see:

 

  At the Primary site level, it accepts the request from the CAS, processes it for the Update_ComplianceStatus table (only), and creates a cab file of the data to be sent to the CAS. This shows up in rcmctrl.log, along with the number of columns to be processed.

Once the cab file is created, it is sent to the CAS for processing.   Monitor sender.log on the Primary site server if you'd like to see the progress.
Back at the CAS, once it receives the cab, it processes it by removing the old data for Update_ComplianceStatus and replacing it with the new data, along with the number of rows/records to be processed.

 During this stage, the CAS DRS status switches to "Maintenance Mode", and a couple of replication groups may be degraded while this happens. To check the DRS status, run "exec spDiagDRS" in SSMS against the CAS db.
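If you want to keep an eye on it from SSMS, the check is simply the stored procedure mentioned above, run against the CAS database:

-- Check the hierarchy's DRS/replication status (run against the CAS db)
EXEC spDiagDRS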

 

When the CAS finishes processing the bcp, the status is recorded in rcmctrl.log as shown below.

 

 

Run "exec spDiagDRS" once again to check the overall DRS status of the hierarchy. The CAS should be off of the “Maintenance Mode” status at this point.
Now, try running the SQL query we use above to check for the client(s) patch status once again. Voila! The discrepancies that were there before should now be gone. Big thanks to our MS support folks for helping us resolve this issue!

 

DRS, SQL, Patch Reporting, Patch Compliance


What's SUP???


(Updated: 8/20/2018)

What is up with SUPs???   (SUPs – Software Update Point servers…  You know, the only antiquated server role in your CM hierarchy. :))  We’ve had so many SUP storms in our organization I have seriously lost count…  This is where the WSUSPools are just getting severely hammered on our SUPs, constantly… CPU and RAM are through the roof, clients are constantly generating errors or timing out, and network consumption at our low-bandwidth sites is at capacity.  This recent one was more than a storm.  It literally stopped our users from working at the branch sites, due to the high network consumption from all of the scanning and rescanning coming from the clients.  Maybe storm is not the word, it was a hurricane!  We always thought running the default WSUS maintenance on our SUPs periodically through POSH cmdlets was enough.   But something is off, clearly…  This has always left us scratching our heads trying to figure out exactly what’s going on with our SUPs.  We’re constantly searching for answers on how to tame our SUPs, and constantly adjusting the pool and IIS settings.   (Which I believe we’ve got right this time.  Check out my peer Sherry Kissinger’s blog regarding WSUSPool, web.config, and CI settings.)   But this time around, it seems the clients are just not completely downloading all of the metadata...

So we reached out to our dedicated MS support folks (who, btw, are awesome), and worked with them closely on figuring out exactly what’s going on with our WSUS environment.  We wanted to know if there’s a way to identify and measure the metadata that the clients are downloading, and they gave us the SQL below to run against the SUSDBs.  It tells us which articles are deployable and the size of each article.  The recommendation was to go straight into the WSUS console and decline the updates with large metadata that we’re not using.  Hmm, we thought that could be a lot!  We had never ever gone into the WSUS console for anything!  Who does, right?  Since that’s always been the rule, never mess with the WSUS console.  NOT this time.

Run this SQL (from MS support) against your SUSDB to view all of the deployable updates you have.  (This was separated into two queries, but Sherry put it together).

 

;with cte as (
  SELECT dbo.tbXml.RevisionID, ISNULL(datalength(dbo.tbXml.RootElementXmlCompressed), 0) as LENGTH
  FROM dbo.tbXml
  INNER JOIN dbo.tbProperty ON dbo.tbXml.RevisionID = dbo.tbProperty.RevisionID
) --order by Length desc
select
  u.UpdateID,
  cte.LENGTH,
  r.RevisionNumber,
  r.RevisionID,
  lp.Title,
  pr.ExplicitlyDeployable as ED,
  pr.UpdateType,
  pr.CreationDate
from
  tbUpdate u
  inner join tbRevision r on u.LocalUpdateID = r.LocalUpdateID
  inner join tbProperty pr on pr.RevisionID = r.RevisionID
  inner join cte on cte.revisionid = r.revisionid
  inner join tbLocalizedPropertyForRevision lpr on r.RevisionID = lpr.RevisionID
  inner join tbLocalizedProperty lp on lpr.LocalizedPropertyID = lp.LocalizedPropertyID
where
  lpr.LanguageID = 1033
  and r.RevisionID in (
    select
      t1.RevisionID
    from
      tbBundleAll t1
      inner join tbBundleAtLeastOne t2 on t1.BundledID = t2.BundledID
    where
      ishidden = 0 and pr.ExplicitlyDeployable = 1
  )
order by cte.length desc

 

 

Once we got the number of articles we had marked as “deployable”, we noticed there were tons of updates that we were not using or had never really used.   So clients were clearly downloading/scanning for all of these unnecessary articles, hence all of the timeouts we were seeing.  Thus, cleaning up is what we needed to do, by declining all of these updates in WSUS in an attempt to make the metadata lean.
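At its core, the decline boils down to a few WSUS administration API calls.  Here's a rough sketch of that core pattern only, with a hypothetical server name and example filters; the actual script described below covers far more products, adds logging/reporting, and is what you should really run.

# Rough sketch of the core decline pattern (NOT the full script). Adjust server/port/SSL to your SUP.
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("SUP01", $false, 8530)

# Grab everything that isn't already declined
$updates = $wsus.GetUpdates() | Where-Object { -not $_.IsDeclined }

# Superseded updates, plus product families we never deploy (Itanium/ia64 shown as an example filter)
$toDecline = $updates | Where-Object { $_.IsSuperseded -or $_.Title -match 'Itanium|ia64' }

foreach ($update in $toDecline) {
    Write-Host "Declining: $($update.Title)"
    $update.Decline()
}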

Meghan Stewart from MS has a really great guide for maintaining WSUS/Software Update Points (which I strongly recommend you follow).  I grabbed the script from her post and enhanced it a little by adding functions for declining Itanium, Windows XP, IE, and Embedded updates, etc., with more optional functions added recently (see below for details).   Not only did we need to decline superseded updates, we also needed to decline unused and unnecessary updates that were lingering around for no reason other than consuming space and network bandwidth during client scanning.  And we had to find a way to automate this process so we could include it in our maintenance plan.  Lastly, I added email reporting (new) along with event logging, since we need SCOM to be able to pick up those errors/events so we can be alerted upon failures.  Prior to actually declining all of these unnecessary updates, we had over 14k articles marked as deployable.  After running the script, we now have fewer than 5k.  A HUGE chunk was taken off, which obviously made scanning times MUCH faster, the timeouts went away, and network bandwidth consumption dropped significantly in no time.  Script download link below.

 

LATEST UPDATE:  (8/20/2018) Added the following (NEW!!!)

  • Fixed the missing comma in one of the parameters (Thanks Johan for pointing that out!)
  • Improved/updated OS filtering to only allow declining of the targeted OS updates.
  • Added Decline updates for the following: Windows 7, Windows 8, Windows 8.1, Windows Server 2003, Windows Server 08, Windows Server 08 R2, Windows Server 12, and Windows Server 12 R2.

 

UPDATE:  (5/10/2018) Added the following

  • ARM64 based
  • Internet Explorer 10
  • Added Maintenance option for the UpdateList folder to prevent buildup.  ($CleanUpdatelist = $true by default.  Will delete files older than 90 days, which can be specified in the param section, $CleanULNumber).
  • Applied a fix where the script would continue to try to decline updates even after failure when querying for updates on target WSUS/SUP server.  The script will now stop when that occurs.  Of course, you'll be alerted if/when this happens.
  • More performance improvements

 

UPDATE:  (4/25/2018) Added the following

  • Windows 10 Next
  • Server Next
  • Email Report option (Set to $false by default)
  • Performance improvements

 

UPDATE:  (4/13/2018) On top of being able to decline superseded, Itanium, and XP updates, you can now also decline the following updates:

  • Preview
  • Beta
  • Internet Explorer 7, 8, and/or 9
  • Embedded

 

Run-DeclineUpdate-Cleanup Script V5 (UPDATED) <-- Download Link

 

Here’s what the script does:

  1. Decline superseded updates. (# of days can be specified by using the –ExclusionPeriod)
  2. Decline Itanium updates. (can be omitted by using the –SkipItanium switch)
  3. Decline Windows XP updates. (can be omitted by using –SkipXP switch)
  4. Decline Preview updates. (can be omitted by using –SkipPrev switch)
  5. Decline Beta updates. (can be omitted by using –SkipBeta switch)
  6. Decline Windows 10 Next Updates. (can be omitted by using –SkipWin10Next switch) 
  7. Decline Server Next Updates. (can be omitted by using –SkipServerNext switch)
  8. Decline ARM64 based Updates. (can be omitted by using –SkipServerNext switch) 
  9. Decline Windows 7 updates. (can be omitted by using –SkipWin7 switch, $true by default) NEW!!
  10. Decline Windows 8 updates. (can be omitted by using –SkipWin8 switch, $true by default) NEW!!
  11. Decline Windows 8.1 updates. (can be omitted by using –SkipWin81 switch, $true by default) NEW!!
  12. Decline Windows Server 2003 updates. (can be omitted by using –SkipWin2k3 switch, $true by default) NEW!!
  13. Decline Windows Server 2008 updates. (can be omitted by using –SkipWin2k8 switch, $true by default) NEW!!
  14. Decline Windows Server 2008 R2 updates. (can be omitted by using –SkipWin2k8R2 switch, $true by default) NEW!!
  15. Decline Windows Server 2012 updates. (can be omitted by using –SkipWin12 switch, $true by default) NEW!!
  16. Decline Windows Server 2012 R2 updates. (can be omitted by using –SkipWin12R2 switch, $true by default) NEW!!
  17. Decline IE 7 updates. (can be omitted by using –SkipIE7 switch)
  18. Decline IE 8 updates. (can be omitted by using –SkipIE8 switch, $true by default)
  19. Decline IE 9 updates. (can be omitted by using –SkipIE9 switch, $true by default)
  20. Decline IE 10 updates. (can be omitted by using –SkipIE10 switch, $true by default) 
  21. Decline Embedded updates. (can be omitted by using –SkipEmbedded switch)
  22. Can be run with –TrialRun, which only records what would be declined. (I highly recommend running this first and examining the data in the “UpdateList” folder it creates.)
  23. It creates event logs for success/failure of the script or failure during the decline process.
  24. Cleans the UpdateList folder.  It deletes files/folders that are older than x days.  (can be turned on/off with the -CleanUpdateList switch.  It is set to $true by default, with the age threshold set to 90 days by default.  Check $CleanULNumber in param) NEW!!!
  25. Sends Email Report (Optional, see below for screenshot) 

NOTE: I strongly recommend running this with the -TrialRun switch first, and evaluating what it would decline by reviewing the htm and csv files it creates under the "UpdateList" folder.  See the comment section in the script for more details.

Requirement: The WSUS console must be installed on the machine where the script is executed.  If a CAS is in place, the downlevel servers MUST run the script first, then the upstream or top SUP server, when declining updates.

This script can be run against an individual WSUS/SUP server, or a list of WSUS servers.   To run it against an individual server, just use –Servers <WSUSServer>.  If you have a CAS, this script must be run on the lower-tier SUPs first, then on the top.  This can also be automated!
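For example, a cautious first pass against a single SUP might look like this (hypothetical server name; the parameter names are the ones listed above):

# Dry run first: records what would be declined under the UpdateList folder, declines nothing
.\Run-DeclineUpdate-Cleanup.ps1 -Servers "SUP01.contoso.com" -TrialRun

# Once the TrialRun output looks right, run it for real (keep superseded updates for 30 days, skip Embedded)
.\Run-DeclineUpdate-Cleanup.ps1 -Servers "SUP01.contoso.com" -ExclusionPeriod 30 -SkipEmbedded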

To automate it with a CAS and child sites (using Task Scheduler):

1. Modify the script and adjust the $Servers parameter (lower-tier SUPs to run first, then the top SUP server).  NOTE: If the SUSDB is shared, it only needs to run on one SUP.

$Servers = @("<lowerSUPServer1>","<lowerSUPServer2>","<CASLevelSUPServer>")

2. Pick a server with WSUS Console installed to run this on (we run this on our Top SUP, since WSUS console is already on it.)

3. Add this server to all WSUS/SUP servers’ local\admins group.

4. Give the server the appropriate access to the SUSDB

5. On this server, create a scheduled task, and define the schedule to fit your needs (the recommendation is to run it monthly to keep the metadata lean)

6. Add a program using the following settings:

Program/Script: %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe

Add arguments: C:\APPSFOLDER\Run-DeclineUpdate-Cleanup.ps1

Start in: C:\APPSFOLDER

Voila!  Automated.  And all you need to do is review the results periodically, if necessary.
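If you'd rather create the scheduled task with POSH instead of clicking through Task Scheduler, something along these lines should do it (same paths as in the settings above; the 4-week trigger is a stand-in for "monthly", since New-ScheduledTaskTrigger doesn't expose a monthly option directly):

# Roughly equivalent to steps 5-6 above; run on the server that has the WSUS console installed
$action  = New-ScheduledTaskAction -Execute "$env:SystemRoot\syswow64\WindowsPowerShell\v1.0\powershell.exe" -Argument 'C:\APPSFOLDER\Run-DeclineUpdate-Cleanup.ps1' -WorkingDirectory 'C:\APPSFOLDER'

# Every 4 weeks as an approximation of a monthly run
$trigger = New-ScheduledTaskTrigger -Weekly -WeeksInterval 4 -DaysOfWeek Saturday -At 2am

Register-ScheduledTask -TaskName 'Run-DeclineUpdate-Cleanup' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest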

Sample Email Report (NEW!!!)

 

Again, follow the basic WSUS maintenance from Meghan's post, look at your WSUSpool/web.config settings and consider the settings in Sherry's blog (working great for us), and decline superseded updates and updates on WSUS servers that are no longer being used.

That is what's SUP!!! 


POSH for quickly applying Bandwidth Limitation on SUP Servers

For large organizations, having your CM servers, including SUPs, at a central location with a high-bandwidth connection is the optimal design.   However, the SUPs may sometimes cause network saturation within remote branches or pockets with low-bandwidth connectivity.   This happens when clients either move from one site to another (requesting a full scan), or when they fail repeatedly and try to connect to another SUP from the SUP list within your site and perform a full scan.   The way to safely control this is to leverage QoS within your environment.   But if QoS is not available, a quick way to get this under control is to temporarily apply a bandwidth limit on your SUP servers until the full-scan jobs are gone.   Here's a quick POSH one-liner to remotely apply the limitation on your WSUS Administration web sites (SUPs).

The default value of maxBandwidth is 4294967295.   Below, I'm setting the SUPs to a really low value, 100k.   Play around with that number and see what the acceptable value is for your environment, so you avoid scan failures while still being network friendly during the saturation.

invoke-command -computername (Get-Content "C:\SUPServerList.txt") -scriptblock {Import-module WebAdministration ; Set-ItemProperty 'IIS:\Sites\WSUS Administration' -name limits.maxBandwidth 100000}
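And when the saturation clears, the same one-liner sets things back to the default value mentioned above:

# Revert to the default (effectively unlimited) maxBandwidth once the full-scan backlog is gone
invoke-command -computername (Get-Content "C:\SUPServerList.txt") -scriptblock {Import-Module WebAdministration ; Set-ItemProperty 'IIS:\Sites\WSUS Administration' -name limits.maxBandwidth 4294967295}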

 


BEWARE: A couple of issues after upgrading a CM12 Primary site to Windows Server 2012 R2

After upgrading your CM12 Primary site(s) to Windows Server 2012 R2, you may experience the following issues.

1.  You may not be able to access the console after the upgrade.  Check the SMS Admins group's permissions on the Primary site's WMI namespaces Root/SMS and Root/SMS/Site_XXX (a quick remote WMI check is included after the list below).

SMS Admins group should have the following:

    • Root/SMS
      • Enable Account
      • Remote Enable
    • Root/SMS/Site_XXX
      • Executable Methods
      • Provider Write
      • Enable Account
      • Remote Enable
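A quick way to test whether an affected admin account can even reach the SMS Provider remotely (hypothetical site server name; use your own site code in place of XXX) is:

# If these fail with "Access denied", revisit the WMI permissions listed above for the SMS Admins group
Get-WmiObject -ComputerName "PRIMARYSITESERVER" -Namespace "root\SMS" -Class SMS_ProviderLocation
Get-WmiObject -ComputerName "PRIMARYSITESERVER" -Namespace "root\SMS\Site_XXX" -Class SMS_Site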

 

2.  Your MPs may experience issues moving files into their Primary site’s inbox folders after upgrading the Primary site to Server 2012 R2.   Over time, if this goes unnoticed, you will see your clients become inactive in the console.   You may see errors similar to the ones below in your MPFDM.log.

(MPFDM.log error screenshot)

 

The only effective fix we have found so far is to add the MPs to the local Administrators group of the Primary site server that was upgraded.
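Here's one way to script that workaround (hypothetical server, domain, and MP names; swap in your own MP computer accounts):

# Add each MP's computer account to the upgraded primary site server's local Administrators group
Invoke-Command -ComputerName "PRIMARYSITESERVER" -ScriptBlock {
    net localgroup Administrators 'CONTOSO\MPSERVER01$' /add
    net localgroup Administrators 'CONTOSO\MPSERVER02$' /add
}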

 

CMCB
