We ran into an issue with Microsoft Remote Assistance (MSRA) after disabling RC4 encryption support. I was having a hell of a time troubleshooting it and eventually resorted to Wireshark.
Wireshark showed that the encryption type used for the MSRA traffic itself was AES, as it should be. No problem there. Then I looked at the Kerberos traffic specifically and saw alternating KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN and KDC_ERR_ETYPE_NOSUPP errors. So the issue was not MSRA but Kerberos.
Diving further, I found that the TGS-REQ packet in Wireshark showed the principal target was not the machine, as I expected, but the end user. Fun fact: when you MSRA to a machine, your Kerberos ticket is generated for the logged-on end user, not for the machine account. We checked the “This account supports AES” checkboxes in AD for the target user, and the issue still occurred.
I checked the logs on the domain controller and came across one in the Kerberos-Key-Distribution-Center category. It was a KDCEVENT_NO_KEY_INTERSECTION_TGS event, which stated: “While processing a TGS request for the target server, the account did not have a suitable key for generating a kerberos ticket (the missing key has an ID of #). The requested etypes were # 23 #. The accounts available etypes were 23. Changing or resetting the password of will generate a proper key.” This pretty much explicitly stated the fix.
While the supported encryption types matched on paper, the target user had to change their password before AES actually worked: an account’s AES keys are only generated when its password is changed or reset. So, if you disable RC4 support, ensure your user accounts have the AES checkboxes set, and then make sure they change their passwords shortly after, or at least before you need to use MSRA to assist them.
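If you want to get ahead of this in bulk, something along these lines can flag accounts that do not advertise AES yet, tick the checkboxes, and force a password change so new AES keys get generated. This is only a sketch using the ActiveDirectory RSAT module; the SearchBase below is a placeholder, so adjust it for your environment.
#Sketch: find enabled users that do not advertise AES support yet
#Requires the ActiveDirectory RSAT module; the SearchBase is a placeholder
Import-Module ActiveDirectory
$Users = Get-ADUser -Filter 'Enabled -eq $true' -SearchBase "OU=Staff,DC=example,DC=com" -Properties "msDS-SupportedEncryptionTypes"
foreach ($User in $Users)
{
    $ETypes = $User."msDS-SupportedEncryptionTypes"
    #0x8 = AES128, 0x10 = AES256
    if (-not ($ETypes -band 0x18))
    {
        #Tick the AES checkboxes for the account
        Set-ADUser -Identity $User -KerberosEncryptionType "AES128,AES256"
        #Force a password change so the AES keys actually get created
        Set-ADUser -Identity $User -ChangePasswordAtLogon $true
    }
}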
As I was writing my previous post on optimizing PowerShell, I thought of other tips I have used to speed up scripts, specifically around getting data into PowerShell. Like before, I will start with a summary of recommendations and then move on to the details.
Summary
Silence your scripts. Any text printed to the console comes with severe time overhead. If you need progress updates, make sure you use Write-Progress over Write-Host.
If you are looking up data in an array at random, then turn your array into a hash-table instead.
When querying a server or system for data, try pulling all the data you need at once instead of one item at a time. This can speed up your scripts even if you pull more data than you actually need, though it does depend heavily on the system you are querying and how much extraneous data you get back.
Console Output
A quick side note here: outputting text to the console is very slow. You can speed up some commands by silencing their output, and you have a few different ways of doing this. Let's look at them.
Name                  Method          Time (ms) per 5000 iterations
Piping to Out-Null    $I | Out-Null   107.8185
Saving out to $null   $null = $I      9.9016
So if you need to silence something, saving the output to a variable or $null is far faster than piping to Out-Null. Now that we know the faster method of silencing a command, let's see just how slow printing to the console is.
Name            Method           Time (ms) per 5000 iterations
Print Command   Write-Host $I    4434.8804
Silenced        $null = $I       9.9016
That is around 450 times faster. So if you need speed, consider removing unneeded Write-Host commands, or silencing functions by saving their output to $null. Some good news though: Write-Progress is fairly safe to use.
Name             Method           Time (ms) per 5000 iterations
Write Progress   Write-Progress   642.605
So use Write-Progress over Write-Host if you need progress updates. The metrics above were pulled with a small timing script.
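A minimal sketch of that kind of timing script, using Measure-Command, looks something like this. It is an approximation of the harness rather than the exact script, and the numbers will vary by host and hardware.
#Time 5000 iterations of each output method
$Iterations = 5000
$OutNullTime = Measure-Command { for ($i = 0; $i -lt $Iterations; $i++) { $i | Out-Null } }
$NullAssignTime = Measure-Command { for ($i = 0; $i -lt $Iterations; $i++) { $null = $i } }
$WriteHostTime = Measure-Command { for ($i = 0; $i -lt $Iterations; $i++) { Write-Host $i } }
$WriteProgressTime = Measure-Command { for ($i = 0; $i -lt $Iterations; $i++) { Write-Progress -Activity "Benchmark" -PercentComplete (($i / $Iterations) * 100) } }
"Out-Null: $($OutNullTime.TotalMilliseconds) ms"
"`$null assignment: $($NullAssignTime.TotalMilliseconds) ms"
"Write-Host: $($WriteHostTime.TotalMilliseconds) ms"
"Write-Progress: $($WriteProgressTime.TotalMilliseconds) ms"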
PowerShell often returns data in arrays. These arrays are not very fast to query for a single item, however. This does not matter for small arrays, or if you are going to iterate through every item anyway; but if you need to pull a single item out of the array based on one of its properties, it can be slow unless you do something to index the data.
The most common method I use is to turn the array into a hash-table. This only works if the property you use to look up each object is unique within the array.
I will not focus on the speed metrics here since I already covered hash-table metrics in my last post. Instead, I want to show you how to convert an array into a hash-table.
First, you need to choose a property that you will query the data on. This is more often than not the object name. If you are querying users from AD, this could be the sAMAccountName or something similar. The only restriction is that this property must be unique for every object in the array!
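As a concrete sketch, here is how I would index AD users by SamAccountName. The filter and the sample account name are placeholders; swap in whatever unique property fits your data.
#Pull the array once, then index it into a hash-table keyed on a unique property
$Users = Get-ADUser -Filter * -Properties DisplayName
$UserTable = @{}
foreach ($User in $Users)
{
    #SamAccountName is unique per user, which makes it a safe key
    $UserTable.Add($User.SamAccountName, $User)
}
#Lookups against the hash-table are now nearly instant
$UserTable["john.doe"].DisplayName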
Another tool you can use to speed up searches is to sort your array and use BinarySearch. This does not really work on PowerShell's generic object arrays, so if you go this route, make sure you use strongly typed arrays. It also works best on arrays of core data types (int, string, float, etc.) instead of complex objects. If you need to search an array of complex objects based on one of their properties, consider hash-tables instead; otherwise, you would need to create your own IComparable class.
Let's see how to create, sort, and search these arrays. To create the array, use the .NET constructor. In these examples, I will use a string array.
$ItemArray = [string[]]::new($ItemCount)
Next, fill your array with your data, then call the static Sort method on it. This is where you would pass your custom IComparable object; the core data types are already comparable, so nothing extra is needed for a string array.
[Array]::Sort($ItemArray)
Finally, call the BinarySearch() method when searching for an item in the array, or checking for the existence of an item. Instead of $ItemArray.Contains(), use something like this:
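A quick sketch of the call, using the $ItemArray from above ("SomeValue" is just a placeholder). BinarySearch() returns the index of the item if it is found, or a negative number if it is not.
#Returns the zero-based index of the item, or a negative number when not found
$Index = [Array]::BinarySearch($ItemArray, "SomeValue")
#Equivalent to $ItemArray.Contains("SomeValue"), but against the sorted array
$Found = $Index -ge 0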
If you are querying a lot of data, this is likely a bottleneck in your script; there are ways to speed it up, however. In general, pulling all of your data at once is faster than pulling individual objects one at a time. This applies to many commands, but I can attest to Get-ADUser and Get-Item/Get-ChildItem specifically. To the metrics!
Name                       Method                                   Time (ms) per 500 iterations on 100 files
Get files 1 at a time      Get-Item -Path <FilePath>                11227.0608
Get all files              Get-ChildItem -Path <FolderPath>         1148.6165
Get all file names         Get-ChildItem -Path <FolderPath> -Name   664.0743
Get all files by wildcard  Get-Item -Path "<FolderPath>\*"          4060.0878
We can see that pulling all files is faster than pulling them one at a time. Also, if you only need the file names, then adding -Name to Get-ChildItem is faster than having PowerShell grab all file info.
This does not tell the full story. What about filtering it? When we pull one at a time, we have the one file that we need, but if we pull all of them, then we need to search our array and that adds time. But how much? Not a lot if you create a hash-table first!
This is so much faster, that even if you pull twice as many files as you need, it is still faster than pulling the files one at a time! In this next example, I doubled the files in the directory, but still only query for 100.
(Table: Time (ms) per 500 iterations on 100 out of 200 files.)
So even if we pull twice as many files into the hash-table as we need to query, it is still twice as fast as pulling the files one at a time! Note that when creating the hash-table, I am using the .Add() function, which is far faster than $HashTable += @{Key=Value}, per my previous post.
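The script I used to pull these metrics was essentially a timing harness around the two approaches. A simplified sketch of it is below; the folder path and file names are placeholders, and it is an approximation rather than the exact script.
#Placeholders: point these at a real test folder and file set
$FolderPath = "C:\Temp\MetricsTest"
$FileNames = 1..100 | ForEach-Object { "File$_.txt" }
#One Get-Item call per file
$OneAtATime = Measure-Command {
    foreach ($Name in $FileNames)
    {
        $File = Get-Item -Path (Join-Path -Path $FolderPath -ChildPath $Name)
    }
}
#One Get-ChildItem call, indexed into a hash-table, then looked up per file
$AllAtOnce = Measure-Command {
    $FileTable = @{}
    foreach ($File in (Get-ChildItem -Path $FolderPath))
    {
        $FileTable.Add($File.Name, $File)
    }
    foreach ($Name in $FileNames)
    {
        $File = $FileTable[$Name]
    }
}
"One at a time: $($OneAtATime.TotalMilliseconds) ms"
"All at once:   $($AllAtOnce.TotalMilliseconds) ms"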
Sometimes PowerShell is slow, especially when you are dealing with a large amount of data, but there are ways of speeding things up depending on what you are doing. This post will focus on how to speed up loops, arrays, and hash-tables. All metrics were gathered on Windows 10 1909 with PowerShell 5.1. Let's start with the summary.
Summary
Pre-initialize your arrays if possible. Instead of adding things to your array one at a time, if you know how long your array needs to be, create it at that length and then fill it.
If you do not know what length the array will be, create a list instead. Adding objects to lists is far faster than adding to an array.
If you need to do random lookups on a set of data, consider sorting your array/list and then calling BinarySearch()
Avoid searching by piping an array to Where-Object; either turn it into a hash-table, or sort the array and use BinarySearch()
When adding items to a hash-table, use the Add() function
When looping through objects, consider using a normal foreach(){}
Arrays and Lists
Now for the actual metrics. You can find the scripts used under each section. For this section, we’ll look at arrays/lists. First, creating and filling.
Most methods of creating and filling arrays are fairly similar. The only noticeable slowdown is if you use PowerShell's native array and do not pre-initialize it. This is because, behind the scenes, arrays are a fixed size: every add effectively re-creates the entire array just to append one item.
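To make the options concrete, here is a small sketch of the three filling patterns (the item count is arbitrary):
$ItemCount = 10000
#Slow: += effectively re-creates the array on every add
$GrowingArray = @()
for ($i = 0; $i -lt $ItemCount; $i++) { $GrowingArray += $i }
#Fast: pre-initialize to the final size, then fill by index
$PreSizedArray = [int[]]::new($ItemCount)
for ($i = 0; $i -lt $ItemCount; $i++) { $PreSizedArray[$i] = $i }
#Fast and flexible: use a List when you do not know the final size
$List = [System.Collections.Generic.List[int]]::new()
for ($i = 0; $i -lt $ItemCount; $i++) { $List.Add($i) }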
Now on to read performance. In this test, I am just using a simple .Contains() check. While the performance does vary depending on the type of array, everything comes in under one second for 10,000 iterations, which is not noticeable to humans. The only noticeable exception is piping your array to Where-Object for searching; that took 11 minutes! If you really need speed, though, sorting your array and using BinarySearch is the way to go.
Name                              Method                                Time (ms) per 10000 iterations
Native Array Contains             $PSArray.Contains($i)                 177.2726
.Net Array Contains               $PSArray.Contains($i)                 38.632
.Net List Contains                $PSArray.Contains($i)                 87.9633
.Net List BinarySearch            $PSArray.BinarySearch($i)             23.1007
Native Array with Pipe Filtering  $PSArray | Where-Object {$_ -eq $i}   680831.936
Let's take a look at hash-tables. Hash-tables are useful because they let you assign a key to an object and then query it quickly later. Adding to them can be slow, though: the $HashTable += @{} pattern builds a brand-new hash-table for every item added, and that has a noticeably negative effect. At 22 seconds for 10000 items added, it is still doable for most scripts, but it just keeps getting slower the more items you add. A quick and easy change is to use the Add() function instead of the $HashTable += @{} pattern. If you do that, there is no real performance difference between the native PowerShell hash-table and a .Net Dictionary.
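For clarity, a sketch of the slow pattern versus the two fast ones (the keys and values are just loop counters):
$ItemCount = 10000
#Slow: += builds a new hash-table on every iteration
$SlowTable = @{}
for ($i = 0; $i -lt $ItemCount; $i++) { $SlowTable += @{$i = "Value$i"} }
#Fast: Add() inserts into the existing hash-table
$FastTable = @{}
for ($i = 0; $i -lt $ItemCount; $i++) { $FastTable.Add($i, "Value$i") }
#Also fast: a .Net Dictionary
$Dictionary = [System.Collections.Generic.Dictionary[int,string]]::new()
for ($i = 0; $i -lt $ItemCount; $i++) { $Dictionary.Add($i, "Value$i") }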
To show how slow hash-tables can get as you add more items, I charted it out.
Well, that is not super useful, is it? All it shows is that $HashTable += @{} is so slow the other methods do not even register. Let's look at that on a log10 scale.
Definitely make sure you use the .Add() function for any large hash-table!
For reading hash-tables, I just checked how quickly keys could be searched. Both .Net and the native method of creating hash-tables were suitably fast.
Name                       Method                     Time (ms) per 10000 iterations
Native Hashtable Contains  $PSArray.ContainsKey($i)   31.041
.Net Hashtable Contains    $PSArray.ContainsKey($i)   21.0664
Scripts used for metrics gathering and the Excel sheet used to create charts.
Finally, let's look at loop performance. If you need to perform some action on every item in a collection, you have several options. It would take a large collection to notice much of a difference between them, but in my tests, a foreach(){} loop outperformed all other methods, and piping a collection to ForEach-Object {} had the worst performance.
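For reference, here are the looping options being compared, against a throwaway array (the loop body is just a placeholder calculation):
$Items = 1..100000
#Fastest in my tests: the foreach statement
foreach ($Item in $Items) { $null = $Item * 2 }
#Another option: a classic for loop
for ($i = 0; $i -lt $Items.Count; $i++) { $null = $Items[$i] * 2 }
#Slowest in my tests: piping to ForEach-Object
$Items | ForEach-Object { $null = $_ * 2 }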
We had some users complaining about old Adobe Reader updates not installing from WSUS. The issue was inconvenient, but as soon as SCCM pushed more recent Adobe updates to the user, it went away. We decided to expire these old updates and remove them; however, there was a problem. Whenever we attempted to publish an update as expired from SCUP, we got "Verification of file signature failed for file: <Some cab file path here>". I had seen issues like this before and tried to expire the updates using PowerShell/.Net instead, something I have had to do in the past when we lost our SCUP database file. My go-to code for that is:
#This code largely from https://myitforum.com/how-to-expire-a-custom-update-in-wsus-using-powershell/
#Run this on the WSUS server for the central site
#Load .NET assembly
[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
#Connect to WSUS server
$wsusrv = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
#Get all the non-Microsoft updates
$otherupdates = $wsusrv.GetUpdates() | Select-Object * | Where-Object {$_.UpdateSource -ne "MicrosoftUpdate"}
#$wsusrv.GetUpdate($_.id) #Get more info on a specific update
$otherupdates | Where-Object { <#$_.id -eq "" -or #> $_.title -like "*adobe*"} | ForEach-Object {$wsusrv.ExpirePackage($_.id)}
This still did not work, however; the script returned the exact same error as when expiring the update from SCUP. It turns out the SCUP certificate that signed these cab files had expired about a month prior to this issue. In a last-ditch effort, we rolled the server time back to a point when the certificate was still valid, which let us expire the updates. We were then able to re-publish the updates as expired from SCUP, and the issue was resolved.
I have played around with the idea of making a single-file HTML report easily exportable from PowerShell before. A couple of these used to be hosted in the old version of this blog. We recently had to rebuild a report at my office and I decided it would be a good time to make another go at an HTML reporting framework. This time, something more generalized and customizable.
The end result this time is a framework that takes an HTML template, one or more CSS templates, images, and custom outputs from scripts, and combines these resources into a single-file report that can be sent out without any dependency files. The idea behind the separate template elements is to keep the report structure, the design, and the scripts relatively separate, preventing a massive monolithic monstrosity. If you need to add a new item to the report, say CPU utilization or some other metric, you just add a new child script. If you need to adjust the colors used in the report, but not its contents, you just edit the master CSS template; if you need to adjust the structure of the report, you can do that too, without ever touching the PowerShell scripts responsible for gathering the information being reported.
The main script looks at custom tags in the template itself and fills the final report in with the output from the child scripts. If this sounds like something that could be useful, check out the project on GitHub.
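To give an idea of how the tag replacement works, here is a rough sketch of the approach. This is not the framework's actual code; the tag format, folder layout, and file names are all made up for illustration.
#Sketch only: replace placeholder tags in the template with child script output
$Template = Get-Content -Path ".\Template.html" -Raw
$ScriptRoot = ".\ChildScripts"
#In this sketch, tags look like <!--REPORT:CpuUtilization-->
$TagPattern = "<!--REPORT:(?<Name>\w+)-->"
$Report = [regex]::Replace($Template, $TagPattern, {
    param($Match)
    $ScriptPath = Join-Path -Path $ScriptRoot -ChildPath "$($Match.Groups['Name'].Value).ps1"
    #Each child script returns the HTML fragment for its section of the report
    (& $ScriptPath) -join "`r`n"
})
$Report | Out-File -FilePath ".\Report.html" -Encoding UTF8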
We ran into additional issues with our Adobe Update Server. Clients were not using the internal update server despite having the override files pointing them to it. Wireshark showed that clients with the override file would connect to the server, download a single file, and then send any further update attempts to Adobe's public servers. It turns out that, in our case, some additional URLs were being blocked, which prevented a complete Adobe Update Server setup. I have no clue which ones specifically; however, if you have this issue, ensure your update server can reach the URLs/domains listed in Adobe's endpoint documentation.
Going through this process I learned a few troubleshooting/configuration tricks from the client’s side. In no particular order, here are some notes:
Try the Remote Update Manager (RUM) locally on the client. It can usually be found at “C:\Program Files (x86)\Common Files\Adobe\OOBE_Enterprise”. If it fails to download from your update server, it should return an error in the console that may point you in the correct direction.
Ensure you check the client’s “%TEMP%\CreativeCloud\ACC\ACC.log” and “%TEMP%\CreativeCloud\ACC\AdobeDownload\DLM.log” log files.
Turn on directory browsing in IIS (if using IIS to host the update files) and ensure that you can download each type of file from your updates URL.
Override files must go in both "C:\Program Files (x86)\Common Files\Adobe\UpdaterResources" and "C:\ProgramData\Adobe\AAMUpdater\1.0". One of those directories does not exist by default; create it.
You can enable the Apps tab for clients in the “C:\Program Files (x86)\Common Files\Adobe\OOBE\Configs\ServiceConfig.xml” file.
I was setting up an Adobe Update Server using the Adobe Update Server Setup Tool, and it was just not working. It kept failing with error codes 2 and 4 in the console. I was pretty sure that it was our proxy that was causing the grief.
I opened Procmon and monitored the AUSST tool, and found that it was not even attempting to use the proxy. I checked C:\Users\<user>\AppData\Local\Temp\AdobeUpdaterServerSetupTool.log but only found what the console was already telling me: general network issues. In Procmon, however, I saw it was also writing to C:\Users\<user>\AppData\Local\Temp\CreativeCloud\ACC\AdobeDownload\DLM.log, which the console does not tell you about. I opened that one up and got a little further, though not much. It did give a better error than just network issues: "failed to resolve the proxy setting on the machine", along with error 12180. If you look up 12180, it largely repeats the same message, just that the proxy settings failed. Finally, out of desperation, I started changing the proxy settings manually and found that if "Auto Detect" was checked in the Windows proxy settings, AUSST would just give up on the proxy entirely, whether you had a PAC file or manually entered proxy settings or not.
I immediately ran into another issue. This one was a bit easier to troubleshoot: it was still failing to download files, but at least it was actually making connections. In AdobeUpdaterServerSetupTool.log I saw "Failed to download icons", "Failed to complete migration", and "Internal error occurred". Now that I knew about DLM.log, I checked that too; it was spitting out error 12175, which again was more helpful than AdobeUpdaterServerSetupTool.log's output. If you look that error up, it roughly translates to an SSL error. I popped open Wireshark to see if I could find any handshake errors, and while I did not, I did find the URL of the file the machine was attempting to download. I put that URL directly into Internet Explorer and was immediately met with an SSL trust error: the root certificate for Adobe's update server was not installed on the machine. I imported the certificate, and the tool started working after that.
TLDR;
If you get general network-type errors in AdobeUpdaterServerSetupTool.log and error 12180/"failed to resolve the proxy setting on the machine" in DLM.log, uncheck "Auto Detect" in your proxy settings.
If you get HTTP security/12175 errors in DLM.log, and possibly "Failed to download icons", "Failed to complete migration", and "Internal error occurred" in AdobeUpdaterServerSetupTool.log, ensure you do not have any issues trusting the certificate chain Adobe uses on their update servers.
Update: I had an additional issue with our update server. You can read about it, along with some general client-side troubleshooting steps, in this post.
I wanted to automate our user management of Adobe Creative Cloud, which requires interfacing with Adobe's User Management API. One of the coolest functions I created in this initiative lets you synchronize an Adobe group with an Active Directory group. I intend to use this AD group for AppLocker, SCCM deployments, and syncing to Adobe Creative Cloud, which should largely automate the entire Creative Cloud deployment and reduce administrative overhead. The end result: an administrator adds someone to the "Approved CC Users" group, and everything else is hands-free.
See the GitHub repo for the PowerShell script and additional information and resources.
Continuing my PowerShell automation notes for SCCM. Below is a rough example of how to deploy applications in SCCM 2016 using PowerShell. The real meat of it comes down to five cmdlets; as an extra goodie, I also included the cmdlet to remove old deployments. Note: if you are using these cmdlets on a new machine or under a new account, the account that will run them should open the SCCM console on that machine and click the "Connect with PowerShell" option first. If the account has not done this, the SCCM drive will not be available when the SCCM cmdlets are imported, and you will see errors such as "A drive with the name 'xyz' does not exist."
Command Run Down
The core commands we are interested in are
New-CMApplication # Creates a new application in SCCM
Add-CMScriptDeploymentType # Adds a script based deployment type or optionally
Add-CMMsiDeploymentType # Which will add an MSI deployment type
An additional note on these two: they are currently your only deployment-type options via these cmdlets, and this directly limits your installation detection options. MSI is locked to using the MSI product GUID for detection. If you need anything more complex than that, you are pretty much stuck with a script-based detection method and the script deployment type; the registry key and file detection options are not currently available here. Luckily, you can do nearly any detection method in PowerShell. The script below, for example, checks a registry key. For more information on writing PowerShell-based installation detection scripts, see the relevant docs.microsoft.com article and David O'Brien's blog.
Start-CMContentDistribution # Distributes our content to our Distribution Points
Start-CMApplicationDeployment # Actually deploys our finished application package to end users
Remove-CMDeployment # Removes deployments. Useful if you have an older deployment you are replacing
Move-CMObject # Moves your Application to a different folder within the SCCM console
Example Script
Param
(
[string]$PackageDirectory="\\Path\To\Package\Source\",#Package source
[string]$IconPath="C:\SomePath\SomeIcon.ico",#Icon to show in software center
[string]$SCCMDrive="SCM:\",#Should be your 3 character site code generally
[string]$SCCMAdmin="john.doe",#Owning admin
[string]$TargetCollection="All Windows Workstations",#Collection to deploy to
[string]$LogPath="C:\Logs\somelog.log",#Path to save log
[string]$TargetDPs = "All Distribution Points"#Distribution point group to deploy to
)
#Start a log
$Log = "Starting Package Script`r`n"
try
{
#Import SCCM Module
Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -ErrorAction Stop
if (-not (Test-Path -Path $SCCMDrive))
{
$Log+="SCCM PowerShell cmdlets provider is not initialized for this account, on this machine. Please open the SCCM console and select 'Connect with PowerShell' at least once before using this script on thise machine.`r`n"
throw "SCCM PSProvider does not have a drive assigned"
}
catch
{
#End script if we could not add module
$Log += "Failed to add required module!`r`n"
$Log += "----End Package Script----`r`n"
$Log | Out-File -FilePath $LogPath -Append
exit 1
}
#TODO: Prepare files as needed here
#Maybe dynamically get file verison, or application name, unzip files if needed, etc
$Version = "1.0.0.0"
$ProductName = "Sample Application"
$ApplicationName = "$ProductName $Version"
$Publisher = "ACME"
$InstallCommand = "`"SomeInstaller.exe`" /s"
$DetectScript = "if (Test-Path `"HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\MyProduct`"){ if ((Get-ItemProperty -Path `"HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\MyProduct`" -Name `"DisplayVersion`").DisplayVersion -eq `"$Version`") { Write-Host `"Installed`" } }; exit 0"
try
{
$Log += "Creating `"$ApplicationName`"`r`n"
#Change to SCCM powershell provider, SCCM cmdlets generally do not work otherwise, however, some cmdlets may fail in the SCCM drive. Keep this in mind, you may need to switch between providers
CD $SCCMDrive
#Create a new application. (Won't deploy, won't distribute, won't create deployment type. Just the application info)
$MyNewApp = New-CMApplication -Name $ApplicationName -Description "Auto-Added by Packager.ps1" -Publisher $Publisher -SoftwareVersion $Version -LocalizedName $ApplicationName `
-Owner $SCCMAdmin -SupportContact $SCCMAdmin -IconLocationFile $IconPath -ErrorAction Stop
#Move the application
$Log += "Moving`"$ApplicationName`"`r`n"
$MyNewApp | Move-CMObject -FolderPath "$($SCCMDrive)Application\SomePath\ToPlace"
$Log += "Creating `"$ApplicationName`" - Install`r`n"
#Add a deployment type to the new application, this won't distribute or deploy it
Add-CMScriptDeploymentType -ApplicationName $ApplicationName -ContentLocation $PackageDirectory -ContentFallback -EnableBranchCache -InstallCommand $InstallCommand `
-LogonRequirementType WhetherOrNotUserLoggedOn -SlowNetworkDeploymentMode Download -UserInteractionMode Hidden -InstallationBehaviorType InstallForSystem `
-DeploymentTypeName "Install" -ScriptLanguage PowerShell -ScriptText $DetectScript -ErrorAction Stop
#If you are doing an MSI install, look into "Add-CMMsiDeploymentType"
#https://docs.microsoft.com/en-us/powershell/sccm/configurationmanager/vlatest/add-cmmsideploymenttype
$Log += "Distributing `"$ApplicationName`" - Install`r`n"
#Distribute the content; this does not deploy it
Start-CMContentDistribution -ApplicationName $ApplicationName -DistributionPointGroupName $TargetDPs
$Log += "Deploying `"$ApplicationName`" - Install`r`n"
#Deploy the new application
Start-CMApplicationDeployment -CollectionName $TargetCollection -Name $ApplicationName -DeadlineDateTime ([DateTime]::Now) -AvailableDateTime ([DateTime]::Now) `
-DeployAction Install -DeployPurpose Required -OverrideServiceWindow $true -TimeBaseOn LocalTime -UseMeteredNetwork $true
$Log += "Stopping old deployments`r`n"
#Additionally, we can stop any old deployments.
#Grab all apps similarly named to what we just deployed, but that are not what we just deployed
$Apps = @()+(Get-CMApplication -Fast | Where-Object -FilterScript {$_.LocalizedDisplayName -like "$ProductName *" -and $_.LocalizedDisplayName -ne $ApplicationName -and $_.IsDeployed})
foreach ($App in $Apps)
{
Write-Log "Deployment for $($App.LocalizedDisplayName) stopped`r`n"
#And remove their deployment rule
#You may need to change application name. My ApplicationName and LocalizedDisplayName usually match
Remove-CMDeployment -CollectionName $TargetCollection -ApplicationName $App.LocalizedDisplayName -Force
}
}
catch
{
Write-Log "Failed to create package. $($_.ToString())"
}
#Return to filesystem provider
cd "$($env:SystemDrive)\"
$Log += "----End Package Script----`r`n"
$Log | Out-File -FilePath $LogPath -Append
exit 0
For additional information on the cmdlets, please see the 2016 cmdlet reference at docs.microsoft.com.
I was recently troubleshooting an issue and needed to view the last few lines of a log. CMTrace and text editors would crash due to the sheer size of the file. PowerShell's Get-Content with the -Tail parameter worked like a charm, however. I didn't want to keep running the command over and over, though, so I decided to replicate the watch command from Linux in PowerShell.
<#
.SYNOPSIS
Repeatedly runs a script block so you can track changes in the command's output. An example use would be watching a log file. The script runs indefinitely; press CTRL+C to cancel it.
.PARAMETER ScriptBlock
Script to execute
.PARAMETER Interval
How often to rerun scriptblock in seconds
.NOTES
Version: 1.0
Author: Matthew Thompson
Creation Date: 2017-07-19
Purpose/Change: Initial script development
.EXAMPLE
&"Start-Watch.ps1" -ScriptBlock {Get-Content -Path "C:\Logs\SomeLog.log" -Tail 20} -Interval 10
#>
Param([scriptblock]$ScriptBlock, [int32]$Interval=5)
#Put the real code in a function so it can be quickly copy-pasted as a child function of other scripts
function Start-Watch
{
Param([scriptblock]$ScriptBlock, [int32]$Interval)
#Set lowest possible datetime so that the script block runs immediately on the first pass
$Start = [DateTime]::MinValue
#Infinite loop; cancelling requires user intervention (CTRL+C)
while($true)
{
#If enough time has passed (Now - LastAttempt)>Selected interval
if ([DateTime]::Now - $Start -ge [TimeSpan]::FromSeconds($Interval))
{
#Clear console and call function
Clear-Host
$ScriptBlock.Invoke()
#Set new start time/last attempt
$Start = [DateTime]::Now
}
#Sleep the thread briefly so the loop does not peg a CPU core at 100%
[System.Threading.Thread]::Sleep(1)
}
}
#Call watch function
Start-Watch -ScriptBlock $ScriptBlock -Interval $Interval