Application Deployment options for all App Deployments

Have you ever wondered if you could get a report of all your Application Deployments' options?  The ones in the GUI, like "User Experience, Show a dialog window instead of a toast" or "Deployment Settings, Send wake-up packets".  No?  Well, I did.  So, with the help of my good friend John Nelson, attached is the SQL to accomplish that.  Below are the (very badly displayed, sorry) results of the query in my lab--where I only have two fake test deployments, which I used to verify all the values were getting reported properly.  "It works in my lab".

Attached --> here<-- is the .sql itself, or it's below.  I thought this would be helpful if, for example, you wanted to be sure every deployment was configured to "show the popup dialog instead of a toast"--you could easily filter and sort to see which deployments might not have a setting you wanted them to have.

--###############################################
--Cleanup any accidentally left behind Temp Tables
--###############################################

 

If(OBJECT_ID('tempdb..#TempDeplInfoBase') Is Not Null)
Begin
 Drop Table #TempDeplInfoBase
End

 

create table #TempDeplInfoBase(
AssignmentID int,
Assignment_UniqueID nvarchar(max),
AssignmentEnabled int,
AssignmentName nvarchar(max),
CollectionName nvarchar(max),
CollectionID nvarchar(8),
InstallorUninstall nvarchar(25),
OptionalOrRequired nvarchar(25),
WOLEnabled int,
DPLocality int,
StartTime DateTime,
EnforcementDeadline DateTime,
TimeType nvarchar(25),
SoftDeadline int,
OverrideServiceWindows int,
RebootOutsideOfServiceWindows int,
WriteFilter int,
RandomizationEnabled int,
RandomizationMinutes int,
UseBranchCache int,
EnableMomAlerts int,
RaiseMomAlertsOnFailure int,
NotifyUser nvarchar(100),
PreDeploy int,
CloseDefinedRunningExes int,
AllowRepair int,
UseDialogNotToast int
)

 

INSERT INTO #TempDeplInfoBase (assignmentid,Assignment_UniqueID,AssignmentEnabled,AssignmentName,CollectionName,CollectionID,InstallorUninstall,OptionalOrRequired,WOLEnabled,
DPLocality,StartTime,EnforcementDeadline,TimeType,SoftDeadline,OverrideServiceWindows,RebootOutsideOfServiceWindows,WriteFilter,RandomizationEnabled,
RandomizationMinutes,UseBranchCache,EnableMomAlerts,RaiseMomAlertsOnFailure,NotifyUser,Predeploy,CloseDefinedRunningExes,AllowRepair,UseDialogNotToast)

 

Select
c.AssignmentID,
c.Assignment_UniqueID,
c.AssignmentEnabled,
c.AssignmentName,
c.CollectionName,
c.collectionid,
Case when c.DesiredConfigType = 1 then 'Install'
  when c.DesiredConfigType = 2 then 'Uninstall'
 else cast(c.DesiredConfigType as nvarchar)
end as 'InstallOrUninstall',
case when c.OfferTypeID = 2 then 'Available'
 when c.OfferTypeID = 0 then 'Required'
 else cast(C.OfferTypeID as nvarchar)
end as 'OptionalOrRequired',
c.WOLEnabled as 'Send Wake-up Packets',
Case when c.DPLocality > 80 then 1 else 0 end as 'Allow clients on a metered connection to download content after deadline',
c.StartTime,
c.EnforcementDeadline,
case when c.UseGMTTimes=0 then 'Client Local Time' Else 'UTC Time' end as 'TimeType',
c.SoftDeadlineEnabled as 'Delay enforcement per user preferences, up to the grace period',
c.OverrideServiceWindows as 'Override Maintenance Window, for Installation',
c.RebootOutsideOfServiceWindows as 'Override Maintenance Window, for System Restart',
c.PersistOnWriteFilterDevices as 'write-filter handling, Commit Changes at deadline for Windows Embedded devices',
c.RandomizationEnabled,
c.RandomizationMinutes,
c.UseBranchCache,
c.DisableMomAlerts as 'Enable SCOM MM',
c.RaiseMomAlertsOnFailure as 'Generate SCOM Alert when failure',
case
 when c.NotifyUser=1 and c.UserUIExperience=1 and (32 & c.OfferFlags) = 32 then 'Use Dialog to NotifyUser at Available, and notify for reboot post-install'
 when c.NotifyUser=1 and c.UserUIExperience=1 and (32 & c.OfferFlags) <> 32 then 'Use Toast to NotifyUser at Available, and notify for reboot post-install'
 when c.NotifyUser=0 and c.UserUIExperience=1 then 'Suppress User at Available, notify if reboot post-install'
 when c.NotifyUser=0 and c.UserUIExperience=0 then 'Suppress all User notifications'
end as 'NotifyUser',
Case when (1 & c.OfferFlags) = 1 then 1 else 0 end as 'PreDeploy', --Pre-Deploy Software to the User Primary Device
Case when (4 & c.OfferFlags) = 4 then 1 else 0 end as 'CloseDefinedRunningExes', --Automatically close any running executables you specified on the install behavior tab of the deployment type properties
Case when (8 & c.OfferFlags) = 8 then 1 else 0 end as 'AllowRepair', --Allow end users to attempt to repair the application
Case when (32 & c.OfferFlags) = 32 then 1 else 0 end as 'UseDialogNotToast' --When software changes are required, show a dialog window to the user instead of a toast notification

from

v_CIAssignment c where c.AssignmentType=2

 

;WITH
PCT9 AS (
  SELECT
  RawTypeID,
  TypeInstanceID,
  SkipUntil,
  ParameterValues.value('(/Parameters/Parameter[@index=3])[1]','integer') AS PCT
FROM
  v_Alert
WHERE
RawTypeID = 9
),

PCT10 AS (
  SELECT
  RawTypeID,
  TypeInstanceID,
  SkipUntil,
  ParameterValues.value('(/Parameters/Parameter[@index=3])[1]','integer') AS PCT
FROM
  v_Alert
WHERE
RawTypeID = 10
)

 

select Distinct
t1.AssignmentID,t1.AssignmentEnabled,t1.CollectionName,t1.CollectionID,t1.InstallOrUninstall,
t1.AssignmentName,t1.OptionalOrRequired,t1.WOLEnabled as 'Send Wake-up Packets',
t1.DPLocality as 'Allow clients on a metered connection to Download content after deadline',
t1.StartTime as 'DeploymentAvailableTime',t1.EnforcementDeadline,t1.TimeType,
t1.SoftDeadline as 'Delay enforcement per user preferences, up to the grace period',
t1.OverrideServiceWindows as 'Override Maintenance Window, for Installation',
t1.RebootOutsideOfServiceWindows as 'Override Maintenance Window, for System Restart',
t1.WriteFilter as 'write-filter handling, Commit Changes at deadline for Windows Embedded devices',
t1.RandomizationEnabled,t1.RandomizationMinutes,t1.UseBranchCache,
t1.EnableMomAlerts,t1.RaiseMomAlertsOnFailure,t1.NotifyUser,
t1.PreDeploy as 'Pre-Deploy Software to the User Primary Device',
t1.CloseDefinedRunningExes as 'Automatically close any running executables you specified on the install behavior tab of the deployment type properties',
t1.AllowRepair as 'Allow End users to Attempt to repair the application',
t1.UseDialogNotToast as 'When software changes are required, show a dialog window to the user instead of a toast notification',
COALESCE(PCT9.SkipUntil,PCT10.SkipUntil) AS 'CM Alert if Success SkipUntil Date',
PCT9.PCT AS 'CM Alert if Success Rate Percentage Less than this after the SkipUntil Date',
PCT10.PCT AS 'CM Alert if Failure Rate Higher than this percentage'
from #TempDeplInfoBase t1
LEFT JOIN PCT9 ON t1.Assignment_UniqueID = PCT9.TypeInstanceID
LEFT JOIN PCT10 ON t1.Assignment_UniqueID = PCT10.TypeInstanceID

--###############################################
--Cleanup any accidentally left behind Temp Tables
--###############################################

If(OBJECT_ID('tempdb..#TempDeplInfoBase') Is Not Null)
Begin
  Drop Table #TempDeplInfoBase
End
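For reference, the OfferFlags bit tests in the query above (1 = PreDeploy, 4 = CloseDefinedRunningExes, 8 = AllowRepair, 32 = UseDialogNotToast) are just bitmask checks.  A minimal Python sketch of the same decoding, in case you ever want to interpret OfferFlags outside of SQL:

```python
# Decode the ConfigMgr OfferFlags bitmask the same way the SQL above does.
# Bit values per the query: 1=PreDeploy, 4=CloseDefinedRunningExes,
# 8=AllowRepair, 32=UseDialogNotToast.
OFFER_FLAG_BITS = {
    "PreDeploy": 1,
    "CloseDefinedRunningExes": 4,
    "AllowRepair": 8,
    "UseDialogNotToast": 32,
}

def decode_offer_flags(offer_flags: int) -> dict:
    """Return each option as 1/0, mirroring the CASE expressions in the SQL."""
    return {name: 1 if (offer_flags & bit) == bit else 0
            for name, bit in OFFER_FLAG_BITS.items()}
```

So an OfferFlags value of 33 (32 + 1) means the deployment pre-deploys to the user's primary device and shows the dialog instead of a toast.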

 

ConfigMgr 2012 Truncate History Tables

Updated 2019-04-29: with DRS replication in ConfigMgr Current Branch, global data tables might be hit by this _HIST truncate, so the SQL below has been modified to exclude global data types--that way your replication (watch rcmctrl.log, for example) doesn't go crazy about having to re-initialize any data.  Thanks very much to Umair Khan, Twitter @TheFrankUK, for the assist!

Have you ever noticed, being the extreme ConfigMgr geek that you are, that you have v_gs and v_hs views, which point back to current and history tables in your database?

Have you ever, and I mean EVER, needed to reference anything in the v_hs views?  Ever?  If you have, then perhaps this isn't for you.  If you've never used the data in the history views... why are you keeping it?  Sure, there are Maintenance Tasks you can tweak to help keep that data down, but... there is a quick (not supported) way to clean that up.

Keeping in mind this is NOT SUPPORTED (but it works anyway), so do this at your own risk, etc. etc.  If you mess up, I don't support you.  Microsoft won't support you.  You have a backup of your database, right?

On your Primary Site (even if you have a CAS, you still do this at your primary sites), all of this is done in SQL, the console is not involved at all.

Take the query below and, in SQL Management Studio, just take a look at how much history data you have.  Only you can determine if that's cause for concern, and whether you want to automate cleaning it up using a SQL Truncate process.  At my company, in the 12+ years that people on this team have been supporting SMS, then ConfigMgr, no one ever needed data in the history tables.  So for us this was a lot of space gained that didn't need to be backed up, and it made nightly processing of some of the maintenance tasks that look at history tables finish MUCH faster than they have in months.

John Nelson (aka Number2) would run the Truncate manually occasionally, but after a while that gets tedious.  :)  So he showed me how to see what is going to be truncated (Query #1), and then how to make a Scheduled Job that runs daily to actually do the Truncate of the history tables.

Query #1: This particular query is only to look at what you have.  It does nothing but show you results.  Run this against all of your ConfigMgr sites with a CM_ database; and see if there is history you want to truncate.  If so, you may want to then move on to running SQL #2 (below).

SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
          sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN
    ArticleData AD on AD.ArticleName = T.Name
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.NAME LIKE '%[_]HIST'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
    AND AD.ReplicationID not in (Select ID from ReplicationData where ReplicationPattern = 'global')
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    rowcounts desc

SQL #2:  This will CREATE a job, with a daily schedule.  Before you run it, change CM_FUN to be CM_<your Site Code>; and you may also want to change
                @active_start_date=20140908
prior to running it, to whatever date you want the daily schedule to really start.  Once created, presuming SQL Server Agent is running on that SQL server, it'll truncate the history tables in the CM_<whatever you put in> database on the schedule defined.

Optional:  After you've run the below, in SQL Management Studio, under SQL Server Agent, Jobs, if you right-click on the new job "ConfigMgr Truncate History Tables", you can select "Start Job at Step..." to have the job run RIGHT now, to confirm it works.  Once it's done, you can re-run Query #1 above and see that it's clean(er).  Note that as machines report inventory, data will go into the history tables frequently.  You may already have new rows right after running the Truncate job, but it should be much less than it was.

Optional:  The next day, or weekly, or monthly--whatever schedule you have internally for checking up on your ConfigMgr infrastructure--every once in a while run Query #1 above; and/or in SQL Server Agent, Jobs, right-click on the ConfigMgr Truncate History Tables job and select "View History" to see that the job was successful.

USE [msdb]
GO /****** Object:  Job [ConfigMgr Truncate History Tables]    Script Date: 9/8/2014 2:05:50 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]    Script Date: 9/8/2014 2:05:51 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'ConfigMgr Truncate History Tables',
                @enabled=1,
                @notify_level_eventlog=0,
                @notify_level_email=0,
                @notify_level_netsend=0,
                @notify_level_page=0,
                @delete_level=0,
                @description=N'Truncate ConfigMgr database History tables',
                @category_name=N'[Uncategorized (Local)]',
                @owner_login_name=N'NT AUTHORITY\SYSTEM', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Truncate]    Script Date: 9/8/2014 2:05:51 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Truncate',
                 @step_id=1,
                @cmdexec_success_code=0,
                @on_success_action=1,
                @on_success_step_id=0,
                @on_fail_action=2,
                @on_fail_step_id=0,
                @retry_attempts=0,
                @retry_interval=0,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'USE [CM_FUN]
GO
DECLARE @SQL NVARCHAR(MAX) = N''
''
SELECT
  @SQL = @SQL+N''TRUNCATE TABLE dbo.''+TABLE_NAME+'';
''  
FROM
  INFORMATION_SCHEMA.TABLES x
  INNER JOIN ArticleData A on x.TABLE_Name = A.ArticleName
WHERE
  x.TABLE_SCHEMA = ''dbo''
  AND x.TABLE_NAME LIKE ''%[_]HIST''
  AND A.ReplicationID not in (Select ID from ReplicationData where ReplicationPattern = ''global'')
ORDER BY
  x.TABLE_NAME
exec sp_executesql @SQL
',
                @database_name=N'CM_FUN',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'ConfigMgr Truncate History',
                @enabled=1,
                @freq_type=4,
                @freq_interval=1,
                @freq_subday_type=1,
                @freq_subday_interval=0,
                @freq_relative_interval=0,
                @freq_recurrence_factor=0,
                @active_start_date=20140908,
                @active_end_date=99991231,
                @active_start_time=231100,
                @active_end_time=235959,
                @schedule_uid=N'9936718a-af85-497b-ac0d-d47d91ce99d8'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

 

 

ConfigMgr MaxExecutionTime Guesses for Updates

There is a situation which MIGHT happen for you.  The default for Cumulative Updates is, I believe, 60 minutes now, but many updates are still defaulting to 10 minutes.  I don't personally think that default should change; however, occasionally there are large updates (think Microsoft Office updates) which might be several hundred MB in size, and might take more than 10 minutes to install.  In your reporting, and when looking at local logs, the CM client says the install "Failed", but all you do is re-scan for updates, and CM says it's installed.  So what gives, you wonder?  Well, this could be a possible reason.  It's not that the install 'failed' per se; after 10 minutes, the CM client simply stopped 'watching for' the successful install.  It timed out, kind of.  Since I noticed a pattern that "it's usually when those updates are ginormous, they take longer to install", below is a POSSIBLE sql query to perhaps help you find and adjust the "Max Execution Timeout" on any individual updates.

A couple of pre-requisites.  Naturally, the content has to be downloaded, so if you run this 5 minutes after a "hotfix Tuesday" sync, it might not have much to say, because the content hasn't been downloaded to calculate how big any particular update is.  So you do have to wait until your content is downloaded to track these down.

Also note that I haven't created any kind of "powershell script" to automatically adjust the Max Execution Timeout.  This is just a report, and the admin would either posh-script changing each individual update, or use the console, find each update, right-click on it and in properties for that update, adjust up the max Execution Timeout to fit.

Also note these "suggestions" are just that: suggestions.  There is no right or wrong answer for how long Max Execution Timeout should be, and leaving it all alone as-is will still work just fine.  Here's a scenario where following these suggestions would be a big bad horrible idea.  Let's say you allow your devices a service window of 4 hours every night.  Then you follow these suggestions, and for whatever reason there were 8 different Office updates, and you changed them all from 10 minutes to 60 minutes each--for a total of 8 hours estimated time to install.  When the client gets the Software Update deployment, it used to think "OK, these 8 will take me 80 minutes, I can do that in my 4 hour window, let's start!"  It would start installing, and maybe it only got 3 done... but it did get 3 done.  If you set them to 60 minutes each, the client might decide "wow, 8 hours... I can't do that in my service window... I'll just wait until I have 8+ hours to get this done", and of course it may never install any of them.  So be careful in deciding whether or not this is a potentially BAD idea for your environment.  Or at least be aware of the potential repercussions, so you know what to un-do.
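To put numbers on that scenario: the client weighs the sum of the deployment's Max Execution Timeouts against the open service window before it starts.  A rough sketch of that decision (my simplification for illustration, not the actual client algorithm):

```python
def fits_in_window(max_exec_minutes, window_hours=4):
    """Would the estimated total run time fit in the service window?"""
    return sum(max_exec_minutes) <= window_hours * 60

# 8 updates at the 10-minute default: 80 minutes total, fits a 4-hour window.
# The same 8 updates bumped to 60 minutes each: 480 minutes, does not fit.
```

Which is exactly why bumping every Office update to 60 minutes can leave a 4-hour-window client deciding never to start at all.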

What this sql does is, for updates released in the last 30 days where content has been downloaded, compare the MaxExecutionTime set vs. how big the content is.  If, for example, the content size is between 50 and 100 MB but its MaxExecutionTime isn't 20 minutes or more, then maybe you the admin might want to think about setting MaxExecutionTime on that specific update to 20 minutes--so you don't get false "I failed to install" reports which a re-scan will address.

Again... this isn't perfect.  It's just a possible suggestion, if you maybe have seen this behavior in your Software Updates deployments, and were wondering if there was a way to be pro-active about increasing the MaxExecutionTime without waiting for your reports to tell you the next day.

DECLARE @StartDate datetime = DateADD(Day, -30, GETDATE())
DECLARE @EndDate datetime = GetDate()

;with cte as (select ui.MaxExecutionTime/60 [Max ExecutionTime in Minutes], ui.articleid, ui.title, ui.DateLastModified, ui.DatePosted
,ui.IsSuperseded, ui.IsExpired
,(SUM(files.FileSize)/1024)/1 as [Size in KB]
,(SUM(files.FileSize)/1024/1024)/1 as [Size in MB]
from v_updateinfo ui
join v_UpdateContents content on content.CI_ID=ui.CI_ID
join vCI_ContentFiles files on files.Content_ID=content.Content_ID
where severity is not null
and content.ContentProvisioned = 1
and ui.dateposted between @StartDate and @EndDate
and ui.IsExpired = 0
group by ui.MaxExecutionTime, ui.articleid, ui.title, ui.DateLastModified, ui.dateposted, ui.IsSuperseded, ui.IsExpired
)

select
Case when cte.[Size in MB] < 50 and cte.[Max ExecutionTime in Minutes] >= 10 then 0
when cte.[Size in MB] BETWEEN 50 and 100 and cte.[Max ExecutionTime in Minutes] >= 20 then 0
when cte.[Size in MB] between 100 and 150 and cte.[Max ExecutionTime in Minutes] >= 30 then 0
when cte.[Size in MB] between 150 and 200 and cte.[Max ExecutionTime in Minutes] >= 40 then 0
when cte.[Size in MB] between 200 and 250 and cte.[Max ExecutionTime in Minutes] >= 50 then 0
when cte.[Size in MB] between 250 and 300 and cte.[Max ExecutionTime in Minutes] >= 60 then 0
when cte.[Size in MB] > 300 and cte.[Max ExecutionTime in Minutes] >=90 then 0
else 1
End as [Could use MaxExecutionTime Adjustment],
case when cte.[Size in MB] < 50 then '10 minutes'
when cte.[Size in MB] BETWEEN 50 and 100 then '20 minutes'
when cte.[Size in MB] between 100 and 150 then '30 minutes'
when cte.[Size in MB] between 150 and 200 then '40 minutes'
when cte.[Size in MB] between 200 and 250 then '50 minutes'
when cte.[Size in MB] between 250 and 300 then '60 minutes'
when cte.[Size in MB] > 300 then '90 minutes'
end as 'time to set'
, cte.*

from cte
order by [Could use MaxExecutionTime Adjustment] desc, [Time to set] desc
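For the curious, the size brackets in that CASE logic boil down to a simple lookup.  Here's a Python restatement of the same brackets, in case you'd rather script against it--remembering the thresholds themselves are just my guesses, as discussed above:

```python
def suggested_max_execution_minutes(size_mb: float) -> int:
    """Suggested Max Execution Timeout, matching the CASE brackets in the SQL."""
    if size_mb < 50:
        return 10
    if size_mb <= 100:
        return 20
    if size_mb <= 150:
        return 30
    if size_mb <= 200:
        return 40
    if size_mb <= 250:
        return 50
    if size_mb <= 300:
        return 60
    return 90  # anything over 300 MB

def needs_adjustment(size_mb: float, current_minutes: int) -> bool:
    """True when the update's current timeout is below the suggestion."""
    return current_minutes < suggested_max_execution_minutes(size_mb)
```

So a 75 MB update still sitting at the 10-minute default would be flagged, just like the [Could use MaxExecutionTime Adjustment] column above.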

IE 11 Enterprise Mode Tracking - Part 1

Internet Explorer 11 comes with this great new feature, Enterprise Mode, and it saves the day when it comes to most browser-related upgrade issues. At least in my case it did a really good job of mitigating them. Unfortunately for me, the documentation didn't do a great job of showing me how to make sense of it all, especially when it came to how I was supposed to collect all the data from the web endpoint in the 'enable' key. (https://msdn.microsoft.com/en-us/library/dn640699.aspx) Basically, you can put a URL in the 'enable' registry key which instructs the browser to post some data to that URL whenever a user enables, or disables, Enterprise Mode. That is great! Well wait, what do we need to set up to collect the data again?
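Concretely, when a user toggles Enterprise Mode the browser sends an HTTP POST to the 'enable' URL with two form fields, URL and EnterpriseMode (the same fields the aspx page later in this post reads).  A small Python sketch of an equivalent POST, handy for testing your endpoint without a browser--the endpoint and the legacy.contoso.com URL are made-up examples:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_em_form(visited_url: str, mode: str) -> bytes:
    """Form body with the two fields IE11 posts: URL and EnterpriseMode."""
    return urlencode({"URL": visited_url, "EnterpriseMode": mode}).encode()

def post_enterprise_mode(endpoint: str, visited_url: str, mode: str) -> bytes:
    """POST the form to the collection endpoint, like the browser would."""
    req = Request(endpoint, data=build_em_form(visited_url, mode))  # data => POST
    with urlopen(req) as resp:
        return resp.read()

# Example: post_enterprise_mode("http://myserver/ie11em.aspx",
#                               "http://legacy.contoso.com/", "On")
```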

There are a few posts out on the interwebs that describe how to do this already.  Some people use a web page to write to a text file, and Microsoft even released a way for us to collect this data with ConfigMgr (https://msdn.microsoft.com/en-us/library/dn833204.aspx). Great, problem solved!  Well, unfortunately ConfigMgr may not be everywhere I need it, or maybe I don't want to collect all this data in my 'already too big for my own good' CM database.  I thought I would share what I have done to make my life a little easier when someone asks "who is using enterprise mode and for which URLs?".  So let's set up an IIS site with a page that collects the data and then writes it into a database for us. Basically we set up IIS, set up the database, then drop an aspx file into our website to write the data to the database for us. **Small disclaimer here: I am a systems admin and not an application developer. So please test this, use the appropriate data locations, and please change the passwords.  Also, let me know what I messed up!**

First off, we need a web server. For the purpose of this article I setup Server 2012 R2 with SQL Express as a starting point. Next I ran the following commands to setup the IIS Role on the server.

DISM /online /enable-feature /featurename:NetFx4 /all
DISM /online /enable-feature /featurename:IIS-WebServerRole
DISM /online /enable-feature /featurename:IIS-ASPNET45 /all
DISM /online /enable-feature /featurename:WCF-HTTP-Activation

Next, let's make a directory for our IIS site and database, then create and configure the IIS site using the commands below.

::Make our web dir
mkdir c:\ie11em
::Make our SQL dir
mkdir c:\sql
::Stop the Default IIS site
c:\windows\system32\inetsrv\appcmd stop site /site.name:"Default Web Site"
::Create our IE11EM IIS site
c:\windows\system32\inetsrv\appcmd.exe add site /name:IE11EM /id:2 /physicalPath:c:\ie11em /bindings:http/*:80:
::Enable windows auth (so we can capture user names)
c:\windows\system32\inetsrv\appcmd set config /section:windowsAuthentication /enabled:true
::Disabled Anonymous Auth
c:\windows\system32\inetsrv\appcmd set config /section:anonymousAuthentication /enabled:false

Now to set up the database, let's open SQL Management Studio and run the following script.  This will also create the SQL auth account for the web page to write to the database.  You will need to enable SQL auth on your SQL instance prior to running this: in SQL Mgmt Studio, right-click on your server name and select Properties, then select the Security node and set the Server Authentication option to 'SQL Server and Windows Authentication mode' (aka Mixed Mode).  Then restart your SQL instance for the changes to take effect.

/************************** Start IE11EM.SQL **************************/
USE [master]
GO
/****** Object: Database [IE11EntModeDB] ******/
CREATE DATABASE [IE11EntModeDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'IE11EntModeDB', FILENAME = N'c:\SQL\IE11EntModeDB.mdf' , SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'IE11EntModeDB_log', FILENAME = N'c:\SQL\IE11EntModeDB_log.ldf' , SIZE = 1024KB , MAXSIZE = 8GB , FILEGROWTH = 10%)
GO
ALTER DATABASE [IE11EntModeDB] SET COMPATIBILITY_LEVEL = 110
GO

USE [IE11EntModeDB]
GO

/****** Object: User [svc.ie11entmode] ******/
CREATE LOGIN [svc.ie11entmode] WITH PASSWORD = 'P@ssw0rd!';
GO
CREATE USER [svc.ie11entmode] FOR LOGIN [svc.ie11entmode] WITH DEFAULT_SCHEMA=[dbo]
GO
ALTER ROLE [db_datareader] ADD MEMBER [svc.ie11entmode]
GO
ALTER ROLE [db_datawriter] ADD MEMBER [svc.ie11entmode]
GO
/****** Object: Table [dbo].[IEEntMode] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[IEEntMode](
[key] [int] IDENTITY(1,1) NOT NULL,
[Source] [nvarchar](128) NULL,
[Url] [nvarchar](2048) NULL,
[Mode] [nvarchar](5) NULL,
[User] [nvarchar](64) NULL,
[TimeStamp] [datetime] NOT NULL CONSTRAINT [DF_IEEntMode_TimeStamp] DEFAULT (getdate()),
CONSTRAINT [PK_IEEntMode] PRIMARY KEY CLUSTERED
(
[key] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO
/****** Object: View [dbo].[vwIE11EnterpriseModeCountSummary] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE VIEW [dbo].[vwIE11EnterpriseModeCountSummary]
AS
SELECT Url, Mode, COUNT(Mode) AS tCount
FROM dbo.IEEntMode
WHERE (Url <> '')
GROUP BY Url, Mode


GO
/****** Object: View [dbo].[vwIE11EnterpriseModeDomainDetail] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE VIEW [dbo].[vwIE11EnterpriseModeDomainDetail]
AS
SELECT LEFT(Url, CHARINDEX('/', Url, 9) - 1) AS Domain, Mode
FROM dbo.IEEntMode
WHERE (Url <> '')
GO

USE [master]
GO

ALTER DATABASE [IE11EntModeDB] SET READ_WRITE
GO
/************************** End IE11EM.SQL **************************/
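A quick note on the vwIE11EnterpriseModeDomainDetail view above: LEFT(Url, CHARINDEX('/', Url, 9) - 1) reduces each URL to its scheme and host, because starting the search at character 9 skips the '//' in 'https://'.  The same logic in Python, for reference--with a guard for URLs that have no slash after the host, which would trip up the SQL version:

```python
def url_to_domain(url: str) -> str:
    """Mirror LEFT(Url, CHARINDEX('/', Url, 9) - 1): keep scheme + host only."""
    slash = url.find("/", 8)  # CHARINDEX position 9 (1-based) == index 8 (0-based)
    return url if slash == -1 else url[:slash]
```

So 'https://intranet.contoso.com/app/page.aspx' rolls up to 'https://intranet.contoso.com', which is what makes the per-domain counts useful.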

Alright, almost there. The last step is to place the aspx page in our web directory. Save the code below as ie11em.aspx into the c:\ie11em directory created earlier.  Make certain to substitute your server's name in the connection string where you see "%YourServerNameHere%" listed.

/************************** Start ie11em.aspx **************************/

<html><head><title>IEEM Logging</title></head><body>

<%
'Dim objFSO, objTextFile, csvFile

Response.write("'" & Request.ServerVariables("REMOTE_HOST") & "','" & Request.ServerVariables("AUTH_USER") & "','" & Request.Form("URL") & "','" & Request.Form("EnterpriseMode") & "'")

Dim sqlConnection1 As New System.Data.SqlClient.SqlConnection("Server=%YourServerNameHere%; Database=IE11EntModeDB; User Id=Svc.ie11entmode; Password=P@ssw0rd!;")

Dim cmd As New System.Data.SqlClient.SqlCommand
cmd.CommandType = System.Data.CommandType.Text
cmd.CommandText = "INSERT IEEntMode ([Source],[User],[Url],[Mode]) VALUES ('" & Request.ServerVariables("REMOTE_HOST") & "','" & Request.ServerVariables("AUTH_USER") & "','" & Request.Form("URL") & "','" & Request.Form("EnterpriseMode") & "')"
cmd.Connection = sqlConnection1

sqlConnection1.Open()
cmd.ExecuteNonQuery()
sqlConnection1.Close()

cmd = Nothing
sqlConnection1 = Nothing

%>
</body></html>

You can now test your IE 11 data collection site by putting "https://%YourServerNameHere%/ie11em.aspx" in your browser.  If everything is working you will see '192.168.1.24','domain\username','','' appear in the browser, where 192.168.1.24 is the IP of the system you opened the browser on, and domain\username is the user you are logged in with.  Since we are not passing the same variable strings the browser does, you will see blanks in the last 2 values.  Once you plug this URL value into the registry key to enable Enterprise Mode, you will also see the URL passed from the browser, and an On or Off value.  (HKLM\Software\Policies\Microsoft\Internet Explorer\Main\EnterpriseMode enable="https://%YourServerNameHere%/ie11em.aspx")

Once you have updated the registry, restarted IE, and enabled Enterprise Mode on a few sites, use the query below to see the data you are collecting.

SELECT [key],[Source],[Url],[Mode],[User] ,[TimeStamp] FROM [dbo].[IEEntMode]

Next up in part 2:  A few SSRS reports to rollup the data and a runbook to rollout the sitelist to, you guessed it, the website above.

Until next time, cheers!

MMS

MMS MoA Nov 10-11-12 2014

For those of you who attended our last meeting, you might recall our survey about a conference this fall.   Well it's now official:  The Midwest Management Summit.  Watch the Twitter feeds for #MMSMinnesota and #MMS.  And learn about the conference at our site: http://mms.mnscug.org

And follow MMS on Facebook: https://www.facebook.com/MidwestManagementSummitMN

MNSCUG June Meeting Notes

Fred's notes from the Active Directory Best Practices session by Robert Wakefield of NowMicro:

Backup of group policies via GPMC or script. Most GP admins do not backup group policy objects.

It's not a bad idea to backup directly from the GPMC

Use PowerShell to back up GPOs; this can be scheduled. Note that group policy backup via PS doesn't get links, security, etc.--only the object itself.

Use a central store to gather and distribute ADMX files.

Purging Data from a SQL DB, Selectively.

I had this odd request to restore a database from backup, then clear all the data from the database with the exception of a dozen tables.  OK, I can do that, but there are 1,200 tables and I have other work to do too.  And you may notice from the tables that it was the CM database.  So let's use a cursor against the list of tables and truncate some data!

USE CM_SMS

declare @name nvarchar(256)

declare cur CURSOR LOCAL for

SELECT name

FROM sys.Tables

--Keep data in these tables

where name not IN('Computer_System_Data','Network_Data','PC_Bios_Data','Operating_System_Data','Processor_Data','PC_Memory_Data','System_Enclosure_DATA','Disk_Data','Video_Controller_Data','WorkstationStatus_DATA','SYSTEM_CONSOLE_USER_DATA','Add_Remove_Programs_64_Data','Add_Remove_Programs_Data','SoftwareFile','System_DISC')

open cur

fetch next from cur into @name

while @@FETCH_STATUS = 0 BEGIN

    BEGIN TRY

     EXECUTE('TRUNCATE TABLE ' + @name)

       print 'Success: ' + @name

    END TRY

     BEGIN CATCH

          print 'Failed: ' + @name

       END CATCH

    fetch next from cur into @name

END

close cur

deallocate cur

Update_ComplianceStatus report inconsistencies on your CAS?

Do you have a CM hierarchy (CAS and Primary site(s)) deployed, and have inconsistencies with update compliance status reporting?   Normally, triggering a "RefreshServerComplianceState" on the troubled endpoints does the trick. (By the way, click here if you'd like to see my colleague Sherry Kissinger's awesome blog about automating this in your environment by making it a CI/Baseline, which has worked great for us.) That "refreshes" and sends the actual patch compliance state of the troubled clients up the chain, fixing the issue.   But what if this doesn't work, or completely resetting the machine policy doesn't cut it, and your patch reporting still has discrepancies?   That's what we've run into in our environment just recently. But how does this happen? When dealing with a hierarchy in a large environment, you're bound to have network hiccups, storage issues, outages, DRS issues, data corruption, etc., all of which could result in data and/or state inconsistencies. Nonetheless, here's how we addressed the issue.

Determining the issue:

When querying the CAS database for the "required" patches that need to be applied on the client in question (Status 2 means Required/Missing), using the SQL query below…  

DECLARE @Resourceid int = (select resourceid from v_r_system_valid s1 where s1.netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>')
If @ResourceID IS NOT NULL
BEGIN
-- get update info
DECLARE @Updates TABLE (CI_ID INT, BulletinID VARCHAR(64), ArticleID VARCHAR(64), Title VARCHAR(512), Description VARCHAR(3000), InfoURL VARCHAR(512), Severity INT,
  IsSuperseded INT, IsExpired INT, DateLastModified DATETIME);
INSERT INTO @Updates
SELECT  CI_ID, BulletinID, ArticleID, Title, Description, InfoURL, Severity,
  IsSuperseded, IsExpired, DateLastModified
FROM
  dbo.v_UpdateInfo ui
  Where Severity is not null
SELECT  upd.BulletinID, upd.ArticleID, ucs.Status, ucs.LastStatusChangeTime, ucs.LastStatusCheckTime,
  CASE WHEN ucs.Status IN(1,3) THEN 'GreenCheck' ELSE 'RedX' END AS StatusImage,
  case
  when upd.Severity = 0 then 'None Declared'
  when upd.Severity=2 then 'Low'
  when upd.Severity=6 then 'Moderate'
  when upd.Severity=8 then 'Important'
  when upd.Severity=10 then 'Critical'
  else cast(upd.Severity as nvarchar) end as 'Severity', upd.IsSuperseded, upd.Title,
  upd.Description,
  upd.InfoURL,
  upd.DateLastModified [Last Modified Date by Microsoft]
FROM
  @Updates upd
  JOIN dbo.v_Update_ComplianceStatusAll ucs
    ON upd.CI_ID = ucs.CI_ID
       AND ucs.ResourceID = @ResourceID
       AND ucs.Status =2  --(only required)
       AND upd.IsExpired = 0
ORDER BY
  upd.severity desc, upd.IsSuperseded, upd.DateLastModified desc, ArticleID desc
END

We would get over 60 various patches, including both Office 2013 and Office 2016 patches.  Um...  Hmm, the workstation doesn't even have Office 2013 installed!  Hmmm...
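To spot that kind of mismatch quickly, a summary can help. Below is a hypothetical variation on the query above (the product keywords and bucketing are my own assumption, not part of the original) that counts a client's "required" updates by product, making an "Office 2013 on a machine without Office 2013" situation jump out:

```sql
DECLARE @ResourceID int = (select resourceid from v_r_system_valid s1
                           where s1.netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>')
SELECT
  -- Bucket titles by product keyword; adjust the keywords to taste
  CASE WHEN ui.Title LIKE '%Office 2013%' THEN 'Office 2013'
       WHEN ui.Title LIKE '%Office 2016%' THEN 'Office 2016'
       ELSE 'Other' END AS [Product],
  COUNT(*) AS [Required Count]
FROM dbo.v_UpdateInfo ui
JOIN dbo.v_Update_ComplianceStatusAll ucs ON ui.CI_ID = ucs.CI_ID
WHERE ucs.ResourceID = @ResourceID
  AND ucs.Status = 2      -- 2 = Required/Missing
  AND ui.IsExpired = 0
GROUP BY CASE WHEN ui.Title LIKE '%Office 2013%' THEN 'Office 2013'
              WHEN ui.Title LIKE '%Office 2016%' THEN 'Office 2016'
              ELSE 'Other' END
```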

But then, when checking the client itself for missing patches, using the PowerShell command line below:

get-wmiobject -computername <HOSTNAME HERE> -query "SELECT * FROM CCM_UpdateStatus" -namespace "root\ccm\SoftwareUpdates\UpdatesStore" | Where-object { $_.Status -eq "Missing" } | select-object Title, Status

We'd only get about 19 patches that were required/missing.

How do we fix this?

As mentioned earlier, triggering the RefreshServerComplianceState at the device level normally fixes the issue.

invoke-command -ComputerName "<HOSTNAME HERE>" -Scriptblock {$SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore; $SCCMUpdatesStore.RefreshServerComplianceState()}

But this didn't work in our case… we were still seeing inconsistencies even after executing this option. After looking at the clients extensively, it turned out the clients were clean, healthy, scanning properly, and reporting inventory up the chain without any issues. We then checked the Primary sites' databases using the same SQL query above to check our clients' "Required" updates, and the results matched exactly what the troubled clients reported for missing and required patches.  It appears the CAS had more stale patch article state than the Primary site databases did, and there were definitely discrepancies between the two.  
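A quick way to compare the two sides without eyeballing full result sets is to run the same count on both the CAS database and the Primary site database. This is a sketch of my own, using the same views as the query above; if the numbers differ for the same client, the replicated state on the CAS is likely stale:

```sql
-- Run this against the CAS db, then against the Primary site db, and compare
DECLARE @ResourceID int = (select resourceid from v_r_system_valid
                           where netbios_name0 = '<ENTER CLIENT HOSTNAME HERE>')
SELECT COUNT(*) AS [Required Update Count]
FROM dbo.v_Update_ComplianceStatusAll ucs
JOIN dbo.v_UpdateInfo ui ON ui.CI_ID = ucs.CI_ID
WHERE ucs.ResourceID = @ResourceID
  AND ucs.Status = 2   -- 2 = Required/Missing
  AND ui.IsExpired = 0
```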

So how do we fix this?   We opened a case with Microsoft support, and they recommended running the following against the CAS db in SSMS, which was an eye-opener for me... I had no idea we could initialize a synchronization on JUST a single article and not the entire replication group! NOTE: This may sound scary, but it's really not.  I really thought it would trigger replication for the entire General_Site_Data group, but no...  just for the specific article we're having issues with.  And in fact, this processed REALLY fast for us.   Nonetheless, below is the magic wand :)  (Drum roll....)  

EXEC spDrsSendSubscriptionInvalid '<CAS site Code>', '<Primary_Site_Code>', 'General_Site_Data', 'Update_ComplianceStatus'

This triggers a replication request from the CAS for only the "Update_ComplianceStatus" article from the target Primary site server.  In the CAS's rcmctrl.log, here's what you would start to see:

[Screenshot: rcmctrl.log on the CAS]

At the Primary site level, it accepts the request from the CAS and processes it for the Update_ComplianceStatus table (only), creating a cab file of the data to be sent to the CAS. This is shown in rcmctrl.log, along with the number of columns to be processed.

Once the cab file is created, it is sent to the CAS for processing.   Monitor sender.log on the Primary site server if you'd like to see the progress.
Back at the CAS, once it receives the cab, it processes it by removing the old data for Update_ComplianceStatus and replacing it with the new, along with the number of rows/records to be processed.

During this stage, the CAS DRS status switches to "Maintenance Mode", and a couple of replication groups may be degraded while this happens. You can check the DRS status by running "exec spDiagDRS" in SSMS against the CAS db.

[Screenshot: spDiagDRS output showing Maintenance Mode]

When the CAS finishes processing the bcp, the status is recorded in rcmctrl.log as shown below.

[Screenshot: rcmctrl.log showing the completed bcp processing]

Run "exec spDiagDRS" once again to check the overall DRS status of the hierarchy. The CAS should be out of "Maintenance Mode" at this point.
Now, try running the SQL query above to check the client(s)' patch status once again. Voila! The discrepancies that were there before should now be gone. Big thanks to our MS support folks for helping us resolve this issue!


Copyright © 2019 - The Twin Cities Systems Management User Group