Friday, February 25, 2011

REPOST - PowerShell with a Purpose

PowerShell with a Purpose Blog
by Don Jones
Don's 18.8-minute PowerShell Crash Course (You CAN Learn This!)
Posted in [PowerShell Core]


Excuse me while I vent for a sec.


I'm getting a little frustrated with admins who tell me they don't have time to learn PowerShell because it's too complex or too complicated or because they don't understand it. Buck up, soldier. First of all, it isn't that hard - and I'm going to prove it, right here, right now. Sit back, relax, and grab a coffee. This is going to take a minute.


Now, I know examples like http://poshcode.org/1943 can make your head spin right off its shoulders. Understand that PowerShell offers features to a few different audience levels, and one of those is "programmer." Just because you aren't a programmer doesn't mean you can't use PowerShell in a different way!


What's a good analogy, here? Carpentry. I'm no carpenter. Mitering a joint is entirely beyond me unless it's a very simple 45-degree angle cut, and even then, I might lose a finger. There's no way I'm building a nice piece of furniture. But, I can swing a hammer hard enough to put up some simple shelves in the garage. Carpentry has a place for entry-level guys like me, as well as super geniuses like what's-his-face who used to work with Bob Vila. Norm. That guy. PowerShell is the same way: The master programmers can do some stuff with it, and schmucks like you and me can do other stuff - and WE don't need to be master programmers.


Can You Run a Command?


Open PowerShell. Go ahead, I'll wait. Now, run a command. Dir. Just type D, I, R, and hit Enter. Was that so hard? No. And you've just covered about one-third of what PowerShell needs you to know.


Want to get a directory of an entire folder tree? Run Dir -recurse. Ooo, look, you used a parameter. Tricky, right? No, of course not - you've probably been running commands with parameters for years. However, if you think this has already gotten too complicated, then PowerShell isn't for you. Stick with a GUI - preferably Office, in fact, because if you're not comfortable running basic commands that use parameters, I'm a bit scared of you running the network!


But seriously, PowerShell commands are just that: Commands. Some commands, like Dir, don't have any mandatory parameters - Dir simply defaults to getting a directory of the current path. But it does have lots of optional parameters. For example, if you want to get a directory of a different path, you can run Dir -path c:\windows. In fact, Microsoft figures you're going to want to do that a lot, so they made the -path parameter positional. So long as you put the path in the first position, you don't need to actually include the parameter name: Dir c:\windows will work just fine.


You can even combine parameters: Dir c:\windows -recurse will work. Again, because the path is in the first position, I don't need the -path parameter name. Saves typing. In fact, to save even more typing, you only need enough of the parameter name so PowerShell can tell which one you meant: Dir c:\windows -re will also work just fine. And if you don't want to remember what order the parameters go in, you don't have to! Just include the parameter names. When you do so, they can go in any order: Dir -recurse -path c:\windows will also function quite well.
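

Just to recap, all of these are equivalent ways of asking for a recursive listing of C:\Windows:


Dir c:\windows -recurse           # positional path, full parameter name
Dir c:\windows -re                # shortened parameter name - just enough to be unambiguous
Dir -recurse -path c:\windows     # named parameters, in any order you like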


Guess what? Dir isn't a command. Technically, the real command name is Get-ChildItem. "Dir" is an alias, put in place so that you don't have to type a ridiculously long name like Get-ChildItem all the time. You could also use "gci" or "ls" in place of "dir," as they are also aliases of Get-ChildItem.
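

If you ever want to check an alias for yourself, the shell will tell you:


Get-Alias dir                         # shows that "dir" really points to Get-ChildItem
Get-Alias -Definition Get-ChildItem   # lists every alias that points to Get-ChildItem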


Some commands DO have mandatory parameters. For example, run Get-EventLog and it will prompt you for the parameter that you forgot to supply: LogName. Press Ctrl+C to break out of that, and then try running the command again by specifying that required parameter: Get-EventLog -log Security -newest 20. Hey, the -newest parameter isn't required, but it does keep the command from running forever and ever!


Where would you learn about all of these parameters? By reading the help. C'mon, you've done it before. You've run attrib /? or dir -? and stuff like that. PowerShell supports the same thing. Seriously. The formal way is to just run something like Help Dir, and if you want to see examples and whatnot, run Help Dir -example. I'll tell you something right now: If you're the kind of person who WILL NOT read the help under any circumstances, then just stop right here. You can't be successful with PowerShell unless you're willing to read the help, because a lot of its commands have tons of options and capabilities, and you'll never learn about them unless you read the help.
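

A couple more help variations worth knowing, while we're at it:


Help Dir -detailed     # parameter descriptions plus the examples
Help Dir -full         # everything, including the fine print on every parameter
Help about_aliases     # the "about" topics explain shell concepts rather than single commands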


Bonus: The help command itself accepts wildcards. Run help * to see everything you can get help on - which will also show you what commands PowerShell has to offer. Again, this is THE way to find out what capabilities you have within the shell.


Adding More Commands


PowerShell comes with more than 400 built-in commands, but you can add more. There are two ways:


•A snap-in. Usually, these get installed when you install the management tools for a product like SQL Server. Just run Get-PSSnapin -registered to see what's installed, and something like Add-PSSnapin WindowsBackup to install a snap-in by name.


•A module. This is a newer way of distributing add-in commands. Run Get-Module -listAvailable to see what's installed, and Import-Module ActiveDirectory to load a module by name.


There is only one PowerShell. Microsoft has confused the issue mightily by creating Start menu shortcuts to the "Active Directory shell" or "Exchange Management Shell," but that's all smoke and mirrors. Those shortcuts (look at the shortcut properties if you don't believe me) simply execute the plain old PowerShell and tell it to automatically load a particular snap-in or module (often by specifying a console file, which is simply a list of snap-ins that should be loaded).
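

To give you the flavor, a product shell shortcut's target boils down to something like one of these (the exact module, snap-in, and console file names vary by product, so treat these as illustrative):


powershell.exe -NoExit -Command "Import-Module ActiveDirectory"
powershell.exe -PSConsoleFile "exchange.psc1"    # console file name made up here; a .psc1 file is just a saved list of snap-ins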


You can do anything from any copy of PowerShell simply by manually loading the right snap-in or module, and you can load as many as you want at the same time. Do Exchange stuff, AD stuff, System Center stuff, and SQL stuff all at once.


Piping Stuff


PowerShell commands run in a pipeline, which is just a fancy way of saying you can connect them together. Want your directory listing in a file?


Dir | Out-File c:\dir.txt


You've probably done something similar before, running Dir | More or something, right? Well, this is the same idea. The output of the first command is "piped" to the input of the second command. Not complicated. PowerShell has LOTS of commands that you might want to pipe stuff to: Export-CSV, Out-Printer, Out-GridView, Out-File, Format-Table, Format-List, Format-Wide, Export-CliXML, ConvertTo-HTML, and more.
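

A few quick examples, just to show how interchangeable those are (the file paths are placeholders - point them wherever you like):


Dir | Export-CSV c:\files.csv
Get-Process | Out-GridView
Get-Service | ConvertTo-HTML | Out-File c:\services.html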


There are a couple of tricks:


•Most Out- commands don't produce output in the pipeline. That means they're generally the last thing on the command-line, if you choose to use one.


•The type of output produced by a Format- command can only be understood by an Out- command. So, if you use a Format- command, it either needs to be the last thing on the pipeline, or it can be second-last if the last thing is an Out- command (other than Out-GridView, which doesn't actually understand what the Format- commands produce).
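

To make that second rule concrete:


Dir | Format-Table | Out-File c:\dir.txt      # fine - a Format- command feeding an Out- command
Dir | Format-Table | Export-CSV c:\dir.csv    # not fine - Export-CSV receives formatting instructions, not your directory data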


Working with Output


At this point, you should be thinking about how you'd work with some of that output. For example, run Get-Process. See how it produces a nice table? Now, filter that down to just the processes using more than 100MB of physical memory.


A traditional command-line jockey would want to do something like this: Pipe the process listing to a text file, and then pipe that text file to a command like Grep. Tell Grep to search for specific numbers in a specific column location (like, the 37th column), and then keep any lines that meet the criteria. Blah. What a ton of work. Anytime you're parsing text like that in PowerShell, you're working too hard. Why?


Okay, I know some of you reading this might be developers or have a lot of shell experience. I'm going to gloss over some fine details that nobody cares about anyway, and it might upset you. If it starts to do so, avoid the urge to correct me by leaving a detailed technical comment. Instead, drink less coffee. Breathe deeply. Try yoga! I'm an admin and I don't really care what's going on inside Windows' head, or inside PowerShell's head. I'm interested in results.


The cool thing about PowerShell is that its commands don't just produce plain text. Instead, they produce what is basically a table of information in memory. With Get-Process, you're seeing a small portion of that table - the whole thing wouldn't actually fit on a screen unless you have, like, a 50" monitor running at some crazy-high resolution. So PowerShell usually only shows you a portion of the table. Run Get-Service and you'll see the same thing. But, in memory, that WHOLE table exists. If you want to check the contents of a certain column, you just refer to its column name. Consider this example:


Get-Process | Where { $_.PM -gt 100MB }


I'm getting all of the processes, and piping them to the Where command. Its job is to examine each row of that table, and remove the rows that don't meet my criteria. I specify my criteria in something called a script block, which is enclosed in those {curly braces}. What's inside that script block is what's special: I use a placeholder, $_, to refer to "the rows of the table." That placeholder is built into PowerShell, which is hard coded to look for that placeholder in certain circumstances. This is one of those circumstances, obviously. The underscore character actually looks like a horizontal line, which is what separates table rows, so that's how I remember what it's for.


A period indicates that I want to access a particular column of the table by name. In this case, I'm accessing the "PM" column of the table rows. I'm then using a PowerShell comparison operator, -gt. If you think back to math class, it's basically the same as the > operator. Then, I'm specifying 100MB. Yes, PowerShell understands the abbreviations KB, MB, GB, TB, and I think PB, which stands for Peanut Butter. No, Petabyte. Sorry.
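

The other comparison operators work the same way - for example, -eq means "equals":


Get-Process | Where { $_.Name -eq 'notepad' }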


Anyway, this is saying, "Keep those rows where the PM column contains a value greater than 100MB." PowerShell knows which column the PM column is - I don't have to tell it to count over 37 characters or whatever. It just knows. And it'll drop the rows that don't match my specification, and the result is whatever's left. I could then pipe that on to another command or two:


Get-Process | Where { $_.PM -gt 100MB } | Sort Name | Export-CSV c:\big-processes.csv


I'll bet you can predict EXACTLY what that command is going to do. Now, try running it and see if you're right.


This is really what PowerShell is all about. Wait a second, though - I said that commands like Get-Process and Get-Service were only showing you PART of the table that was actually being created in memory. How can you see the rest of it? How do you know what other columns you have to work with?


Well, the problem is that a table requires horizontal space, and the shell window only has so much to work with. One solution would be to convert the table into a vertical list, since scroll bars let us have a LOT more vertical space:


Get-Process | Format-List *


The * tells Format-List to convert EVERY table column into a list entry. Yeah, it takes up a lot of space on the screen, but it shows you EVERYTHING. Another technique is to use the Get-Member command, which ONLY shows the column NAMES - it doesn't try to list every column value.


Get-Process | Get-Member


You can use these tricks with just about any command that starts with "Get-," including Get-Service, Get-ADUser, Get-Process, Get-EventLog, and so on. In the output of Get-Member, table columns are identified as a "property" of some kind: NoteProperty, ScriptProperty, AliasProperty, and Property. The differences between those aren't important; all you need to know is that they comprise the available columns that the first command in the pipeline generated. So if you want to work with the output of one command, knowing what table columns are available is a good place to start.
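

For example, once Get-Member tells you that Get-Service output includes a Status column, you can put that column straight to work:


Get-Service | Where { $_.Status -eq 'Running' }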


Breather


Okay, if you've made it this far you'll do fine. In fact, you can already be pretty effective in the shell. You know how to add more commands, get help on commands, and get a list of what commands are available to you. You know how to see what columns are available in a command's output, and you've seen a couple of ways to work with those columns - filtering and sorting.


Do me a favor? If this is proving helpful, please tell a friend. I really want to get the word out that PowerShell isn't that hard. Just send 'em a link to this post and tell them to read it through a couple of times, and to tell a friend when they're done.


And there's more: I keep a running PowerShell FAQ, too, and you can even submit questions to it. I also use my Twitter feed to pass along interesting PowerShell tidbits - if you use Twitter, consider following my feed to get little shell tidbits throughout the week. I even do private training and have done a lot of PowerShell training videos, books, and whatnot - full details on my Website. A lot of it's free, too.


Now the Tricky Bit


I've glossed over some details in the preceding discussion, and we need to circle back and dive into them a bit. Doing so will reveal the main reason why PowerShell is so incredible, and will save you a TON of work once you grasp it. I will NOT pretend that what I'm about to show you is straightforward or intuitive - it took me a long time to really wrap my head around it. Hopefully, I can save you some of that time by explaining it as clearly as I can.


Look at a command like Stop-Service. Actually, look at its help file - the full help file. Run help stop-service -full. Now, this command can work in one of two ways: You can either specify the name(s) of the service(s) you want to stop, or you can specify the service itself. The second bit is the trickier of the two, so let's focus on the first one first.


Notice the -name parameter? That lets you specify the name of the service you want to stop. So you could run Stop-Service -name BITS to stop the BITS service (I like to play with that one since it isn't really crucial to most of the OS working, but it does make stuff like Windows Update function, so be sure to put it back when you're done playing with it).


If you page down (press Spacebar to advance the help file one screen at a time) to the full explanation of the -name parameter, you'll note two interesting facts: It accepts a string as input, and it can accept pipeline input "ByValue." Okay, well, the fact that it accepts a string as its input isn't very surprising. "BITS" is a string of characters, right? Right. But pipeline input? ByValue?

All that means is that, instead of manually specifying the service(s) name(s), you can pipe in that information. Anything you pipe in that is a string will be "attached" to the -name parameter automatically.

'BITS','w32time' | Stop-Service

PowerShell always turns comma-separated lists like that into their own little table, so this one has two rows, and no column header. The values in those rows are, obviously, strings. So PowerShell has to ask itself, "self, what do I do with this pipeline input? Ah - I see that Stop-Service, the next command in the pipe, is willing to accept strings for its -name parameter. I shall send these two rows of strings to the -name parameter." And so the shell will attempt to stop those two services.

Neat, right? ANYPLACE you can get strings from will work. For example, suppose you make a text file containing three service names, with one name per line - exactly as if you were constructing a single-column table of values. You can load that content, which consists of strings, and pipe it to Stop-Service:

Get-Content c:\service-names.txt | Stop-Service

Okay, we said that service name was ONE way Stop-Service could be told which services to stop. The other way is to specify the -inputObject parameter. Now, if you read its detailed help, you can see that it accepts input which is of the type ServiceController. Do you know how to produce a ServiceController? It's not as easy as a simple string or integer. Try running Get-Service | Get-Member. At the very start of the Get-Member output, you'll see the "type name" of what Get-Service produced. This bears some explanation:

We've already discussed how commands like Get-Service produce a table of data in memory. Every command produces a different kind of table, and those tables have a specific name that describes them. In the case of Get-Service, the table is called a ServiceController. That simply means that each row of data has a specific, predefined set of columns, associated with a service. PowerShell uses these "type names" to keep track of the different tables of data, so that it knows which columns get displayed by default, for example.

So, Get-Service produces output of the ServiceController type. That means this:

Get-Service | Stop-Service

Works because all of the services get "attached" to the -inputObject parameter of Stop-Service. See, it accepts pipeline input ByValue, just like -name does. So when PowerShell sees pipeline input of the ServiceController type, the shell knows that it can attach that input to the -inputObject parameter, and Stop-Service uses that parameter to figure out which services you want stopped.

Same Idea, Slightly Trickier

Let's look at something entirely different. I want you to boot up a Windows Server 2008 R2 domain controller virtual machine - if you don't have one, get one. Seriously, this is worth it. I'll wait.

Okay, now open PowerShell on that server. Run Import-Module ActiveDirectory to load in the Active Directory commands, and then run Help New-ADUser -full to review the help. Notice a few things:

•There are parameters for many of the more common AD user attributes, like Department, Title, Organization, Name, City, and so on.

•Only the -name parameter is required, although practically speaking you also have to specify a -samAccountName. Otherwise you'll get a blank samAccountName, which isn't useful.

•Almost all of those attribute-related parameters accept pipeline input "ByPropertyName." This is the slightly-trickier bit we're going to cover right now.

When you pipe a table of information from one command to another, PowerShell first tries to bind all of the table's rows based on their type name, using the ByValue technique described above. If it can't find a parameter that accepts the entire table's type name, then the shell switches to a second mode, called ByPropertyName.

Do this for me: Open Notepad, and create a file with the following contents. Save it as "users.csv." Be sure that Notepad doesn't sneak an extra ".txt" filename extension onto the end of the file name!

name,samaccountname,department,city,title,organization
DonJ,DonJ,IT,Las Vegas,CTO,Concentrated Tech
GregS,GregS,Facilities,Denver,Janitor,Concentrated Tech
ChrisG,ChrisG,Administration,Las Vegas,Business Manager,Concentrated Tech

So this is basically a table, right? Three rows, six columns. Now run Import-CSV users.csv and you'll see that PowerShell is able to read the CSV contents and turn all of that into a PowerShell-like table structure, right in memory. It won't display it as a table by default, because there are too many columns. You can force it to: Import-CSV users.csv | Format-Table will force the shell to do its best.

Anyway, now run Import-CSV users.csv | Get-Member. You'll see that the type name is just a PSObject or something similar. So dig through the parameters of New-ADUser. Do you see any parameters that accept a PSObject from the pipeline ByValue?

You do not. So if we do this (don't actually run this yet):

Import-CSV users.csv  | New-ADUser

The shell will try to attach all of those PSObject rows to a parameter of New-ADUser, but it won't find one that suits. So it shifts into ByPropertyName mode. In this mode, it works a bit differently: The shell will attempt to match parameter names to table column headers. When it finds a match, the value from the matching column will be "fed" to the parameter name. It then executes the command once for every single row that was in the input table.

So, we piped in three rows. New-ADUser will basically run three times (not exactly, but that's the effect you'll observe). Each time, every one of our columns will be "attached" to a parameter of New-ADUser simply because our column names match the parameter names. Go ahead and run that command-line, now - you'll see that three users are created in your domain. We didn't specify passwords (there's a parameter to do so, but we didn't use it), so the accounts are created as disabled.

Any "extra" columns - those whose column names don't match up with a New-ADUser parameter - will simply be ignored. So long as we've provided values for all required parameters (we did), then the command will work. We could also specify other parameters for New-ADUser, such as where we want all of those users created, or even an alternate credential to use for the creation task:

Import-CSV users.csv | New-ADUser -credential DOMAIN\Administrator

Those manually-specified parameters will take effect for every new user that gets created, so they're a good way to specify "universal" options like credentials, destination containers, and so on.
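
For example, here's the same import aimed at a specific OU (the OU path is made up - substitute one from your own domain):

Import-CSV users.csv | New-ADUser -path 'ou=Employees,dc=company,dc=com'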

I know - this is a little tricky. But I have one more bit of trickiness to show you.

Sometimes, You Have to Do it Like This

The ByValue and ByPropertyName pipeline input tricks are crazy-useful. I mean, that trick with creating new AD users is something that would have taken a few dozen lines of code in VBScript - and we're just running commands. We're not programming!

But sometimes, the pipeline input trick doesn't work, often because Microsoft simply didn't wire up some commands to use pipeline input. Wish they had, but sometimes they forget, so we have to work around it.

Let's take Get-WmiObject as an example. It has a -computerName parameter. I'd love to be able to pipe computer names into Get-WmiObject, like this:

Get-Content names.txt | Get-WmiObject -class Win32_BIOS

But you can't, because Get-WmiObject's -computerName parameter isn't set up for that. So you have two options. Here's the first, which is the one I like the best:

Get-WmiObject -class Win32_BIOS -computername (Get-Content names.txt)

Remember in math class, how (parentheses) were used to control the order of evaluation? Basically, everything inside parentheses gets done first, and then you work your way outward? Same deal in PowerShell. What I've done is forced it to execute Get-Content first, and then feed the results of that to -computerName. So I'm accomplishing what I wanted - reading computer names from the text file - just without using the pipeline.
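
The same parenthetical trick works with most parameters that accept a list of values. For instance, reusing that service-names file from earlier:

Stop-Service -name (Get-Content c:\service-names.txt)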

There's a variation on this which is useful. For example, suppose you want to use Get-ADComputer to get computer names from the directory. You want to get all the computers from the West OU of the company.com domain, and you want Get-WmiObject to get operating system information from each one of those computers. Those computers are represented as in-memory tables, just like everything else in PowerShell. So we have to tell the shell which column we want fed to the -computername parameter. You do it like this:

Get-WmiObject -class Win32_OperatingSystem -computername (Get-ADComputer -filter * -searchBase 'ou=West,dc=company,dc=com' | Select -expand Name)

You'd type that all on one line. Here's what's happening:

We've already said that what happens in parentheses stays in parentheses - er, I mean gets executed first. Sorry, I live in Vegas. So the command inside parentheses runs first. Get-ADComputer is going to go get all of the computers specified, and then create that in-memory table. I'm piping that table to the Select command, and asking it to just expand out the contents of the Name column, which is obviously where the computer name lives. So the final result is a big list of computer names - which is then jammed into the -computerName parameter of Get-WmiObject. Mission accomplished.

There's a last way, which I really don't like much, but sometimes it's what you have to use. A while back, Quest introduced a set of Active Directory commands for PowerShell. They're free, and they're still available, although a lot of folks prefer Microsoft's commands (which are available in Server 2008 R2) when they can use them. Anyway, back in the day, Quest's commands didn't do very much of the pipeline input thing. They might have changed that recently, but I don't care, because this example needs them to not do very much of the pipeline input thing. So basically, you could run the New-QADUser command and do substantially the same thing as the Microsoft New-ADUser command, which I used above. The trick is that New-QADUser doesn't (or didn't) accept pipeline input ByPropertyName, so you couldn't just do this:

Import-CSV users.csv  |  New-QADUser

What you'd have to do instead is have the shell go through the table of user information one row at a time, and actually execute New-QADUser with manually-specified parameters, once for each row in the table of user information. You can use that $_ placeholder to stick in the information from the table, essentially creating dynamic parameters of a sort. Looks kind of like this:

Import-CSV users.csv | ForEach { New-QADUser -title $_.title -department $_.department -city $_.city }

Now, that's not exactly it because you also have to carry in the account name and whatnot, but hopefully you get the idea. Here's what's happening:

•Import-CSV creates the in-memory table of user information. It sends it to the next command, ForEach.

•ForEach comes with a {script block}, which specifies the command(s) to run. Those will be run one time for each row in the input table. So, if our CSV file had three rows, then New-QADUser will execute three times.

•Each time the {script block} executes, ForEach looks for the $_ placeholder. As before, it refers to the "current row of data." A period lets us then specify a particular column name from that row. So, we've fed the "Title" column to the -title parameter, the "Department" column to the -department parameter, and so forth.

Again, I don't find this to be especially elegant or intuitive - but once you kind of get the hang of it, it solves the problem. Sometimes it's the only way to solve the problem, so you have to kind of know this syntax.
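
By the way, this same ForEach pattern gives you another way to tackle the earlier Get-WmiObject scenario, if you'd rather skip the parentheses (same made-up OU as before):

Get-ADComputer -filter * -searchBase 'ou=West,dc=company,dc=com' | ForEach { Get-WmiObject -class Win32_BIOS -computerName $_.Name }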

Also, keep in mind that $_ isn't just generally available everywhere in PowerShell - it only works in the specific cases where PowerShell itself is programmed to look for it, and so far I've only shown you two of those places. There are a handful of other places, but that's getting into more complex tasks. This is just a beginner's tutorial!

And I think it's time to end it here. We've covered a LOT of ground, actually, and you've seen some amazing techniques. Start finding production reasons to practice them, and get these nailed. You can then start moving on to other techniques - and you'll be surprised at how much you can get done without any programming at all.

All Done

Okay, you obviously haven't learned ALL of PowerShell - but believe it or not, you've learned the major operational patterns and concepts that make PowerShell function. The rest is mainly practice, and of course finding and taking the time to understand examples others (like me) have written (and I hope you'll continue reading this blog for more examples).

So get started.

Also, I'd love your feedback on this little tutorial. Did it clear anything up? Make anything confusing? I'll continue to update it and edit it over time based on the comments YOU leave below. Let me hear from you!