Just a word on drawing (incorrect) conclusions from any of the performance measurement commands referred to in the answers. There are a number of pitfalls that should be taken into consideration aside from looking at the bare invocation time of a (custom) function or command.
Sjoemelsoftware
'Sjoemelsoftware' voted Dutch word of the year 2015
Sjoemelen means cheating, and the word sjoemelsoftware came into being due to the Volkswagen emissions scandal. The official definition is "software used to influence test results".
Personally, I think that "Sjoemelsoftware" is not always deliberately created to cheat test results but might originate from accommodating practical situations that are similar to the test cases, as shown below.
As an example, using the listed performance measurement commands, Language Integrated Query (LINQ)(1) is often qualified as the fastest way to get something done, and it often is, but certainly not always! Anybody who measures a speed increase of a factor of 40 or more in comparison with native PowerShell commands is probably measuring incorrectly or drawing an incorrect conclusion.
The point is that some .NET classes (like LINQ) use lazy evaluation (also referred to as deferred execution(2)). Meaning that when you assign an expression to a variable, it almost immediately appears to be done, but in fact it hasn't processed anything yet!
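To see the deferral itself, here is a minimal sketch (the Write-Host side effect and the $Lazy and $Result variables are just for illustration):
$Lazy = [Linq.Enumerable]::Select([object[]](1..3), [Func[object,object]] { param($n); Write-Host "Evaluating $n"; $n * 2 })
# Nothing has been written to the host yet: the Select is only defined, not executed
$Result = @($Lazy)   # Forcing enumeration; only now the "Evaluating ..." lines appear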
Let's presume that you dot-source your . .\Dosomething.ps1 command, which contains either a PowerShell or a more sophisticated LINQ expression (for ease of explanation, I have embedded the expressions directly into Measure-Command):
$Data = @(1..100000).ForEach{[PSCustomObject]@{Index=$_;Property=(Get-Random)}}
(Measure-Command {
$PowerShell = $Data.Where{$_.Index -eq 12345}
}).totalmilliseconds
864.5237
(Measure-Command {
$Linq = [Linq.Enumerable]::Where($Data, [Func[object,bool]] { param($Item); Return $Item.Index -eq 12345})
}).totalmilliseconds
24.5949
The result appears obvious: the latter LINQ command is about 40 times faster than the first PowerShell command. Unfortunately, it is not that simple...
Let's display the results:
PS C:\> $PowerShell
Index Property
----- --------
12345 104123841
PS C:\> $Linq
Index Property
----- --------
12345 104123841
As expected, the results are the same, but if you have paid close attention, you will have noticed that it took a lot longer to display the $Linq results than the $PowerShell results.
Let's specifically measure that by just retrieving a property of the resulting object:
PS C:\> (Measure-Command {$PowerShell.Property}).totalmilliseconds
14.8798
PS C:\> (Measure-Command {$Linq.Property}).totalmilliseconds
1360.9435
It took about a factor of 90 longer to retrieve a property of the $Linq object than of the $PowerShell object, and that was just a single object!
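A fairer comparison (just a sketch on top of the measurements above) is to force the enumeration inside the measured block, for example by wrapping the LINQ call in @( ... ) or by calling [Linq.Enumerable]::ToArray, so that the deferred work is included in the timing:
(Measure-Command {
$Linq = @([Linq.Enumerable]::Where($Data, [Func[object,bool]] { param($Item); Return $Item.Index -eq 12345 }))
}).totalmilliseconds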
Also notice another pitfall: if you run the measurement again, certain steps might appear a lot faster than before; this is because some of the expressions have been cached.
Bottom line: if you want to compare the performance between two functions, you will need to implement them in your use case, start with a fresh PowerShell session, and base your conclusion on the actual performance of the complete solution.
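As a sketch of the latter (assuming a PowerShell 7 pwsh executable is on the path; for Windows PowerShell you would use powershell.exe instead), each candidate can be timed in its own fresh session:
pwsh -NoProfile -Command {
$Data = @(1..100000).ForEach{[PSCustomObject]@{Index=$_;Property=(Get-Random)}}
(Measure-Command { $PowerShell = $Data.Where{$_.Index -eq 12345} }).TotalMilliseconds
}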
(1) For more background and examples on PowerShell and LINQ, I recommend this site: High Performance PowerShell with LINQ
(2) I think there is a minor difference between the two concepts, as with lazy evaluation the result is calculated when needed, as opposed to deferred execution where the result is calculated when the system is idle