SQLServerPedia Syndication

Dear #SQLPass Board…

Like most PASS members, I got the email yesterday announcing the PASS board elections, and it looks like a great slate of candidates.  To all of them, I want to say thanks for volunteering your time and energy to run.  Three of you will be elected to help shepherd an influential and active community, and you will probably not often hear a sincere “thank you”.  To the current Board of Directors and the Nomination committee that helped put this slate together, I also want to give thanks; your service is much appreciated.  I do have a slight favor to ask, though, and like most Southerners, I’ll illustrate the need with a personal story.

I have a teenage daughter who has recently acquired her driver’s learning permit, which means that like most parents, I have prematurely aged.  I’ve sat in the passenger seat, gripping the “Oh $h!t” handle while screaming “STOP!” at the top of my lungs while simultaneously punching the imaginary brake pedal in front of me.  I’ve reached over and adjusted the steering wheel to help her avoid drifting into the other lane of traffic.  I’ve done the apologetic wave to other drivers when they pull up beside her at a stop light (after riding her bumper at EXACTLY the speed limit for the last five miles); I’ve been less apologetic to the idiots who screamed at her.  I’ve both protected and corrected her, and after a few short months, she’s become a pretty good driver.  We’ve still got some work to do, but now I don’t hesitate to slide her the keys and head for the passenger seat.  Most days I don’t even grab the handle.

As my daughter has matured behind the wheel, our conversations have changed from specific instructions (like “you have to check your mirrors, adjust your seat, and fasten your safety belt”) to more general principles (“do your checks”).  She’s learned that she can ask me questions, and that she can trust me to pay attention.   We communicate about expectations, and if a situation does come up that we haven’t specifically discussed, she can generally predict what my expectations are.  For example, she knows the 3-second rule for following a car, and she knows that she should be more cautious at night, so she can extrapolate that she should use 4 seconds for following a car at night. 

In my mind, PASS is growing out of its awkward adolescence, and the last few election cycles have demonstrated some serious growing pains.  I don’t want to rehash the controversies, but as a community member I do want to remind you that this election cycle is the time for the organization to shine.  I recognize that with every misstep, the Board of Directors has taken action to learn from the experience, and has taken steps to avoid similar situations.  However, an election without controversy has been elusive for the last three years, and I’m asking you to drive this one home.

How?  Communicate with the community about what the expectations are for the election.  I think you’ve done an OK job with efforts like the elections.sqlpass.org site, but I think you need to go above and beyond this time.  Promote the heck out of it; let the community know that you want to get this election right, and that it’s a priority for PASS.  Make sure that the conversations happen BEFORE the unexpected situations arise.  If something does go awry, then make sure that you can explain what happened and continue to demonstrate that same desire to learn from the experience.

I realize that this note may seem a bit harsh coming from me, since I was intimately involved in at least one of those controversial elections.  As I tell my daughter all the time, I’m partially responsible for her driving mistakes because I’m the one helping her learn.  We’re learning together; I’ve never taught someone to drive before.  That means that we have to be able to discuss what we’re learning, and that we focus on improving the outcomes every time we get in the car.  I expect the same from this election.

What’s your mantra?

I’ve been busy lately making the shift from “hands-on engineer” to “hands-off manager”; it’s been tough, because I’m not accustomed to stepping away from a problem and letting someone else figure it out.  Even though I’ve got an education background, and have been (inconsistently) involved in technical communities helping other people solve problems, I’ve never had to balance the responsibility of making sure that a job gets done with the responsibility of allowing someone to “learn from their mistakes”.  It’s harder than I thought.

As part of my management training, however, I’ve been focusing on laying out principles and guidelines rather than specifics; for example, instead of saying something like:

We need to make sure that John Smith has limited access to that database; can you do that for me?

I’m trying to say something like:

Our databases need to be secure, and that includes differentiation in access based on job roles.  Can you verify that all accounts are set up using that model?  If there are exceptions, can you document why?

The idea is that a) I need to trust that the people that work for me know how to do their jobs, and b) I need them to start aligning their practices with the principles that I think should drive our work.  I realize that last statement sounds a bit tyrannical, but in reality, I trust my team to help me develop those principles.  However, it’s my job to provide the direction, and their job to keep the boat afloat.

As part of this exercise, I’ve begun to realize that I’ve had a worldview for several years that I’ve always fallen back on when it comes to solving problems.  I’ve never thought of it as a mantra before, but the truth is that the following statement sums up my approach to most problems.

Simple is best.

When I encounter a challenge, and am forced to figure out a solution, I always fall back on the thought that a simple solution is usually the best one.  When I’m faced with deciding between option A and option B, I’m usually going to lean toward the one with the fewest moving parts, even if it doesn’t address every possible outcome.  I’m a big fan of things like the 80/20 rule and Occam’s Razor; it’s far easier for me to tackle problems in bite-sized chunks.  It’s the first principle by which I evaluate any solution that my team presents; if it’s not simple, it’s not manageable in my opinion.

Not everyone I work with is oriented toward simplicity; one of my best friends (I’ve worked with him for 9 years) often chooses beautiful and thoroughly complex solutions to problems.  I don’t know if I could express what his mantra is, but I’d bet it would involve something about slicing and dicing problems into small pieces.   I also work with another guy who loses me in jargon in every discussion; he’s very passionate about writing code that covers every scenario, even if it takes 6 months to a year to do so.  I have a tough time thinking like that.

Mantras can be positive or negative.  For example, two of my former managers had different mantras that they relied on; one manager used to say:

Never make a person do what a computer can do for them. 

That helped guide us when making a decision on work priorities; if you’ve got someone cutting-and-pasting information from your application so that they can send an email, and they have to do that on a consistent basis, then you probably need to write some sort of hook into your application to let them use their email program from your app.  If you’re hiring staff because of the uptick in workload, then you should probably figure out ways to reduce the amount of work using computing power.

The other manager (different company; different time in my career) used to chant:

Whoa, whoa, whoa; that’ll delay the project.

which made us all think that the project was more important than the actual details.  It was a joke; the slightest interruption or shift in perspective would tip the entire apple cart.  His inflexibility was legendary, and ultimately led to the loss of his entire staff (before he was eventually sent packing).

What’s your mantra? Do your teammates or employees know it? Does it contribute to the work effort, or has it become a tired joke among your colleagues?

#TSQL2sDay.. For Whom Do You Log?

This month’s T-SQL Tuesday is being hosted by Aaron Nelson [blog | twitter]; since I’m posting this late in the day, I got a chance to sneak a peek at some of the other entries, and most of them were great technical discussions on how to log, what to log, and why you should log.  I’m not feeling technical today; today’s a conceptual day.  So, rather than write about the pros and cons of logging, I thought I would ask you to step back and consider: who is your audience?

At my company, we monitor logs for our customers; I’ve had to reverse engineer a bunch of different log formats in my day, and there are some basic principles behind good logging practices; here’s my stab at them:

1. Logging without analysis is useless storage of information.

I realize that our jobs as data professionals have gotten more exciting because of policy requirements that insist upon the storage of all kinds of information for exceedingly long periods of time.  I recently read a requirement that said a company must maintain source code for 20 years; that’s a heckuva long time to keep a log of changes around.  Unfortunately, if no one ever looks at the log, then why store it?  If you’re going to be storing information, you need to have some process that consumes that information in order to get the most value out of it.  Good logging processes assume that someone will be reviewing the log at some point, and using that information to act. 

2. Logging structures should be flexible.

If you are logging information with the mindset that someone will be reviewing the log in the future, then you need to balance precision (i.e., gathering adequate information to describe the logged event) with saturation (don’t over-log; not every event is always important).  For example, if you’re building an audit log to track changes to a customer account, you want to be able to isolate “risky” behavior from normal account maintenance.  If your logs become lengthy result-sets filled with change statements, it’s easy to overlook important events such as a bad command.

Most logging structures attempt to overcome this by having some sort of categorical typing appended to the log; in other words, if we think in tabular terms, the columns of a log dataset might look like:

  • DateOfEvent – datetime
  • Category – Classification of the event (Error, Information, etc)
  • Severity – Some warning of how important the event is
  • ErrorCode – Some short (usually numeric) code that has meaning extrinsic to the system
  • Message – The payload; a string description of what happened.
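
As a sketch, that tabular structure might be declared like this (the column names follow the list above; the data types are illustrative only, not drawn from any particular product):

```sql
-- Illustrative log table; column names follow the list above
CREATE TABLE EventLog (
    DateOfEvent datetime     NOT NULL,
    Category    varchar(20)  NOT NULL,  -- Error, Information, etc.
    Severity    tinyint      NOT NULL,  -- how important the event is
    ErrorCode   int          NULL,      -- short code with meaning extrinsic to the system
    Message     varchar(max) NOT NULL   -- the payload
);
```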

It becomes relatively easy to isolate the error messages from informational messages; however, how do you search for non-categorical information within the message itself?  For example, if you want to determine that there was a specific error code associated with a specific logon, and that logon information is embedded in the message of your log file, how do you search it?  The easy answer is to use wildcards, but is there a better way?  In my experience, good logs use some form of intra-message tagging to isolate key elements within the message; the log file remains simple for quick searches, but can easily be adapted for more in-depth searches.  I like XML attributes for payload logging; it’s easy to implement, and can be parsed quickly.  For example:

acct="Stuart Ainsworth" msg="Access Denied" object="SuperSecretPage.aspx"

is an easy message to shred and look for all denied attempts on SuperSecretPage.aspx.  If I wanted to look for all activity by Stuart Ainsworth, I could do that as well.
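
One way to shred that message in T-SQL (a sketch, using the attribute names from the example above) is to wrap the attribute pairs in an element and cast the result to the xml data type:

```sql
DECLARE @msg varchar(500) =
    'acct="Stuart Ainsworth" msg="Access Denied" object="SuperSecretPage.aspx"';

-- Wrap the attribute pairs in a dummy element so the string parses as XML
DECLARE @x xml = CAST('<event ' + @msg + ' />' AS xml);

SELECT @x.value('(/event/@acct)[1]',   'varchar(100)') AS acct,
       @x.value('(/event/@object)[1]', 'varchar(100)') AS object;
```

The same value() calls work inside a WHERE clause, so filtering on a single attribute avoids wildcard searches across the whole message.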

3.  Logging should have a maintenance plan.

If you log a lot, you know that logs fill up quickly.  How long do you retain your logs (and the answer shouldn’t be “until the drive fills up”)?  Too much information that is easily accessible is both a security risk and a distraction; if you’re trying to find out about a recent transaction by a user, do you really need to know that they’ve been active for the last 10 years?  Also, if your log file does fill up, is your only option to “nuke it” in order to keep it running and collecting new information?

A good logging strategy will have some sort of retention plan, and a method of maintaining older log records separately from new ones.  Look at SQL Server error logs, for example; every time the service starts, a new log file is created.   Is that the best strategy?  I don’t think so, but it does isolate older logs from newer ones.   If you’re designing a logging method, be sure to figure out a way to keep old records separate from new ones; periodically archive your logs, and find a way to focus your resources on the most recent and relevant information.
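
As a sketch of that kind of retention plan (the table names here are hypothetical, and EventLogArchive is assumed to have the same structure as EventLog), old rows can be archived and purged in a single statement using an OUTPUT clause:

```sql
-- Move log rows older than 90 days into an archive table;
-- the OUTPUT clause captures the deleted rows as they are removed
DELETE FROM EventLog
OUTPUT DELETED.* INTO EventLogArchive
WHERE DateOfEvent < DATEADD(day, -90, GETDATE());
```

Run on a schedule, this keeps the active log focused on recent, relevant information while older records remain available separately.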

#SQLSat111 is a wrap…

I’ve tried to do these wrap-up posts over the years to give advice to the upcoming SQLSaturdays based on our experience, but I wanted to do this one a little differently.  Before I go too much further, let me do two things:

  1. State the obvious:  SQLSATURDAY 111  ROCKED!!!!
  2. Thank a whole bunch of people: Audrey Hammonds, Aaron Nelson, Tim Radney, Julie Smith, Rob Volk, Kristina Mishra, Erin Hicks, Lorra Newton, the AtlantaMDF leadership team, and a whole bunch of speakers and volunteers who helped make this show work (far too many to thank here; it was inspiring to see the people who gave their time to make this work). 

This was the second year that I served as a member of the team, rather than trying to pull it off myself.  Audrey did an amazing job of pulling everything together.   We may have had a few bumpy spots along the way, but from all of the feedback we got, the event ran extremely smoothly for the attendees (all 460 out of the 650 registered).  However, as I was reviewing the twitter feed, one tweet in particular stood out for me:

[tweet screenshot]

This is what it’s all about; an attendee at our event left wanting more.  It made me think about the nature of this post; I usually write up some practical advice on HOW to do something (and lessons learned).  This time I wanted to focus on WHY you should do something.  I guess this is my attempt at inspirational writing, so breathe deep, assume the lotus position, and read on.

First, it’s the people.

SQLSaturdays are a big party, and there are enough of them going on around the world now that we’re starting to become a traveling band of gypsies.  Many of the people who speak at these events speak at a whole bunch of them, and every event becomes a little family reunion.  I love that, but what I loved even more was the fact that I got to see a bunch of my local database people “get their learn on”.  It’s really easy to get caught up in the moment of working at the event, but at the end of the day, the point of this event was to inspire somebody to learn something new, to change the way they approach a problem.

Second, you are people.

If you’re hosting a SQLSaturday, don’t neglect yourself.  I went into this event tired, grumpy, and a little worried because I knew there were some last minute issues that I had neglected.  Guess what; nobody cared.  The party rolled on.  I spent a great deal of time running around trying to make sure that I touched base with people, and I didn’t attend a single session (other than my own).  That was a mistake.  I should have totally taken advantage of the great training opportunities that were there, and learned something myself.

I’m pointing the finger square at myself on this: I need to invest more in me.  Not that I should neglect others, but I’ve neglected studying and learning because I’ve let other things take away my time.  When I was a kid, I used to always hear the mantra of “Faith, Family, and Work”; I still believe that, and for the most part, I’ve done that.  What I’ve forgotten is that Learning=Work!  If I’m not investing in my own education, and not investing in putting my own ideas to digital paper, I’m starving the creative process, and thus starving my own career.

To that end, I guess the person that left this event the most inspired was me.   If you’re thinking about hosting one of these events, don’t forget to learn something yourself.

#SQLSat111 “Biggish Data”

Trying not to be too obnoxious about this, but I’m presenting on “Biggish Data” at SQL Saturday 111 in Atlanta tomorrow; if you’re there, swing by and see me.  I presented this on Monday at AtlantaMDF, and am attempting to incorporate the feedback from that presentation into this one.  Should be good, but I’m still working on the slides.

However, if you attend my session, you’ll get a free koozie!


#TSQL2sDay 28–Jack of All Trades, Master of None

This month’s topic was chosen by Argenis Fernandez (blog | twitter): "Jack of All Trades, Master of None?".  It’s interesting to me lately because I seem to keep encountering blog posts and articles that ask some variant of this question.  I’m not sure if it’s spurred in part by the recent release of SQL Server 2012 (which is going to change a lot of the ways we manage data), or if it’s something in the collective waters bubbling under the Internet.  Or perhaps, it’s just the fact that I’m trying to define myself in a new role at my work.  Don’t know. 

I’ve always been a data person; one of my first jobs in IT was working for a forestry agent doing data entry; my job was to take the handwritten note cards from the tree counter, translate their coding system into digits, and enter it into an old Macintosh computer (this was the late 80’s, and Macs were super sophisticated at the time).  These numbers were important; they represented an estimate of how much timber (and what kind) was on a property, so that the value of that property could be accurately assessed.  It had nothing to do with SQL; wasn’t really even any sort of programming.  Just look at a card, mentally transform the code into a number and tree type, and punch it in.

Later, in college, I got involved in doing social science research; this eventually led to a position in public health & communication research.  Still digging on numbers; I was using SPSS at the time.  I went to bed at night dreaming of statistical relationships and discovering new stuff.  When our department encountered resistance from the IT department for managing one of our studies, I started hacking into Microsoft Access to build a contact management platform.  This was a transition point for me; I went from entering numbers to evaluating numbers to finally programming.  However, it was always data.

I made a career change and dove into true IT work shortly after that; my job title said “Access Report Developer”, but I became the DBA (because there wasn’t one).  I loved SQL 2000, but when the IT department at that company shrank, I became not only a programmer, a report developer, and the DBA, but also the guy that fixed printers.  When I had an opportunity to jump ship to another company to become strictly a DBA, I took it.

I’ve been here for 9 years, now.  In those 9 years, I’ve been the DBA, a database developer, a data architect, and now a manager.  Although I’ve primarily worked with SQL Server, I’ve dabbled in mySQL, XML and Visual Basic.  I’ve looked at backup jobs and maintenance plans, DTS and SSIS (and other solutions); my T-SQL is strong.  I used to think I was a specialist in database development, but I’ve come to realize that there are people who are a lot more specialized than I am.  I’ve also come to realize that data without context is meaningless.

I guess I’m growing up a little; when I was a kid counting trees, I didn’t really care about how efficient the system was.  I was just earning spending money at a job that was better than mowing the lawn.   When I was doing research, I realized how important my work was, but even then there was little that I could do to change long-term outcomes; I was just an analyst.  In IT, I could finally get to a point where I could take the numbers that I saw and do something about it.  I’m not just collecting data for the sake of it; I’m using that data to help make decisions that make my company more profitable (which ultimately leads to more money in my pocket). I collect, I analyze, I interpret, and I react.

So what does this little trip down my history mean in terms of the post topic? For me it means that I’ve got a niche, but it’s a pretty big one.  Most of my work is done with a certain tool, but the skills I’ve learned along the way don’t restrict me to that tool.  I’ve worked with people who get the specifics of the platform a lot better than I do, but miss the big picture; I’ve also worked with visionaries who get lost when I discuss normalization and efficiencies.   I’m neither a Jack of All Trades, nor a Master of just One.  I’m just a data guy, and I like it.

Agile Database Administration

Last night, I had the pleasure of presenting at the Greenville SSIG; my topic was Agile Database Administration, and it was a presentation that I had pulled together at the last minute.  It still needs some refinement, but overall I thought it went pretty well.  However, I was surprised at a couple of things:

1.  Out of my (admittedly) small sample of attendees, only about 2% had encountered Agile methodologies (like Scrum) in their work, and

2.  Even fewer of them understood the challenges that Agile is supposed to solve.

I decided that in order to improve the presentation, I would try to do a little more research; since I own a blog, this seems like the perfect place to synthesize some of my thoughts on the subject.  As I did in my presentation, I’ll try to skew the discussion toward data people, but there may be some challenges along the way (since much of Agile focuses on management of development tasks, not administration or service-oriented tasks).

Begin at the beginning….

The Agile movement begins with the Agile Manifesto, a short statement of belief written by 17 software developers during a weekend at a resort in Utah.  The entire manifesto reads as follows:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

There are twelve basic principles associated with the manifesto; I’ll tackle those at a later post.  The concept of the manifesto is clear, however, in that the goal is to manage tasks through a focus on relationships and the business environment as opposed to strictly moving from point A to point B.

As a sidebar, I do find it quite funny that the Agile Manifesto webpage appears as if it was thrown together after a weekend of drinking and philosophical conversations at a ski resort; it hasn’t changed in 12 years.  I’m not sure if that’s intended as a silent testament to the longevity of agile code or not.

It should be noted that many agile methods actually began before the manifesto; the manifesto was not intended to spawn new methodologies, but rather to provide a common framework for several existing methods.  It’s a philosophy, not a method; you have to think agile before you can be agile (I’ll pause for a minute to let that sink in… ready to move on?)

Let me try to define the agile movement by what it is not:

  • Agile does not mean a particular method of development; rather, it’s an umbrella for several different methods.
  • Agile is not represented by a set of tools, but you should use tools in an agile way.
  • Agile is not an excuse for sloppy work; instead, it’s an attempt to focus on quality throughout development.
  • Agile is not a solution for every organization; the agile philosophy represents a change in process which must be endorsed by the organization in order for it to be successful.

Jumping the candlestick…

The term agile means “nimble”; it’s the ability to quickly respond to a changing environment.  Most agile adherents recognize that processes such as lengthy requirement gathering sessions can cripple the development process; if your organization spends a lot of time defining what it needs up front, the business environment can change before you get it developed.

But what about database administrators?  Most admins I know have mild cases of ADHD; database professionals are the masters of change.  Need a query to see what’s happened over the last year?  No problem.  Backups not working? We’ll look into it.  Server overloaded?  We’ll deal with it.  Printer not working?  Not my department, but sure.  Developers too slow in getting a report to you?  I can do it.   We’re already agile, right?

Wrong. To be agile doesn’t mean you leap from one flame to another; it’s finding a method of getting things done that is responsive to business changes while solving business problems.  If you’re constantly hopping from one issue to the next without figuring out how to keep the fires from occurring, you’re eventually going to get burned (see what I did there?).  If your servers can’t handle the queries your users are throwing at them, you can tune the queries, but you may need to look at a hardware solution.  If your boss keeps asking you for the latest numbers, it may be faster to run the query for her, but writing a report (or showing her how to write a report) will minimize the long-term problem.

Thinking agile means thinking strategically; be nimble (solve the immediate problem), but be clever (address the root cause).

Where do we go from here?

I hope to start pulling together a few blog posts on various agile methods and how they relate to the agile philosophy over the next few weeks; specifically, I hope to talk about how the agile philosophy is influencing my database administration management style.  Until then, here are some links for you to consider:

http://en.wikipedia.org/wiki/Agile_software_development

http://www.agilemanifesto.org/ (Be sure to read the principles on the next page).

http://www.agiledata.org/essays/dbaSkills.html (lots of material here; I need to revisit it)

Presenting at Greenville SSIG on Tuesday, March 6, 2012

Just a quick note: I’m presenting next Tuesday at the Greenville SSIG. Kind of a last minute deal, I’m still writing the presentation in fact. Details below:

Please register at http://ssig201203.eventbrite.com/ . If you forget to register, and want to come to the meeting, please do.

6:00 – 6:30 Food and Networking Sponsored by FGP
6:30 – 6:45 Announcements
6:45 – 8:15 Stuart R Ainsworth: Agile Database Administration
8:15 – 8:30 Closing remarks and giveaways.

Agile Database Administration
Agile development is all the rage, but how do the principles apply to database administrators? This presentation will introduce the basics of the Agile Manifesto, and explain how they can be applied to non-development IT work, such as database administration, maintenance, and support. We’ll cover scrum (one of the most popular development methodologies) and kanban, and identify some of the common struggles with implementing them in an organization. This is an interactive discussion; please bring your tales of success and your horror stories.

Stuart R. Ainsworth is a database architect and a manager of database administration for a financial security company. He’s also a chapter leader for AtlantaMDF (the Atlanta SQL Server User Group), and blogs (infrequently, but hopes to do better) at http://codegumbo.com/.

Stuff in the FROM Clause: Interfaces

Finally wrapping up this blog series (while standing at my mobile office waiting on my daughter to finish track practice); I’ve previously discussed four different categories of database objects that can be used as the source of various SQL statements:

Base Structures

Virtual Structures

Temporary & Semi-Temporary structures

I want to wrap up with the following category: Interfaces.  Interfaces are methods by which queries can be executed on a remote server (and I use that term to mean remote to SQL Server, not necessarily remote to the host), and the results returned for consumption by a SQL statement.   There are basically three interfaces supported by SQL Server.

OPENDATASOURCE

OPENDATASOURCE is an ad-hoc method of connecting to any OLEDB provider; below is a bit of sample code from MSDN:

SELECT *
FROM OPENDATASOURCE('SQLNCLI', 'Data Source=London\Payroll;Integrated Security=SSPI').AdventureWorks2008R2.HumanResources.Employee

If you’re familiar at all with server naming notations for SQL Server, you can easily identify the four-part schema above (Server.Database.Schema.Object).  Note that OPENDATASOURCE is intended for limited, ad-hoc use; frequently used connections should be defined as linked servers to take advantage of security management (particularly in the case of SQL Authentication).

OPENROWSET

OPENROWSET is similar to OPENDATASOURCE; it’s an ad-hoc method of calling data from an OLEDB provider.  The primary difference is that OPENROWSET supports a querystring, as well as pulling data from a single object.  This allows you a great deal of flexibility, because you can execute complicated queries using remote resources, and then return only the data that you need.  OPENROWSET can also be used for BULK insertion of data.
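
A sketch of the querystring form (the server name is borrowed from the MSDN sample above; the query itself is illustrative):

```sql
-- The third argument is a pass-through query executed on the remote server;
-- only its result set comes back to the local instance
SELECT d.Name, d.GroupName
FROM OPENROWSET('SQLNCLI',
    'Server=London\Payroll;Trusted_Connection=yes;',
    'SELECT Name, GroupName
     FROM AdventureWorks2008R2.HumanResources.Department
     WHERE GroupName = ''Manufacturing''') AS d;
```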

OPENQUERY

OPENQUERY is intended for use with linked servers; the purpose is to allow code to be executed on a remote server first, and then the affected result sets being used on the local server.  In order to use the OPENQUERY method, you must first define a linked server, and then use a query string to do the actual heavy lifting on the other box.
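
As a sketch (assuming a linked server named LondonPayroll has already been defined via sp_addlinkedserver):

```sql
-- The query string is executed on the linked server;
-- only the result set crosses the wire
SELECT q.Name
FROM OPENQUERY(LondonPayroll,
    'SELECT Name
     FROM AdventureWorks2008R2.HumanResources.Department
     WHERE GroupName = ''Manufacturing''') AS q;
```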

Interfaces are relatively easy to understand and use, but they need to be carefully evaluated for each situation.  For example, OPENDATASOURCE allows you to quickly retrieve data from a single object on a remote server, but that means that all of the data is retrieved to the local instance.  If the remote server is across a network, then you have to deal with network latency.  OPENROWSET and OPENQUERY provide a work-around (through the use of a querystring), but the querystring cannot be parameterized, so it’s a bit clunky to work with.  However, for quick and dirty movement of data from within T-SQL, interfaces can be a valuable tool at the right time.

Stuff in the FROM clause: Temporary & Semi-Temporary structures

So way back in October, I started a blog series on Stuff in the FROM clause; I never finished it.  I’m trying to return to writing, so I thought it would be best if I completed the remaining two posts (so I can tick one more item off my to-do list).  If you really want to catch up, here’s the links to the first two posts:

Stuff in the FROM clause: Base Structures

Stuff in the FROM clause: Virtual Structures

Today’s post is touching on two different categories of database objects used in the FROM clause: temporary and semi-temporary structures.  Temporary structures are database objects which are used within the context of a single scope or session in SQL Server; after their use, they are destroyed.  Semi-temporary structures can be used across multiple scopes or sessions, but they are destroyed when the server is restarted.

TEMPORARY STRUCTURES

INLINE SUBQUERIES

An inline subquery is much like a view; it’s a method of encapsulating a SELECT statement inside another piece of SQL code for performance gains or simplification of the outer SQL statement.  Inline subqueries exist only within the context of the outer SQL statement; they are not reusable from one statement to the next.

SELECT columnlist
FROM (SELECT columnlist
           FROM table) a

Inline subqueries must be referenced by an alias (in the above example, “a” is the alias).  They can be used like any other object in the FROM clause.

COMMON TABLE EXPRESSIONS

A common table expression (CTE) is similar to an inline subquery; it’s a method of encapsulating a SELECT statement for reuse within another SQL statement; they are not reusable from one statement to the next.

; WITH a AS (SELECT columnlist FROM table)
SELECT columnlist
FROM a

CTE’s are a very powerful piece of coding logic that deserves far more attention than this brief write-up; below are a couple of links for additional reading (including how to do recursion in a CTE):

http://datachix.com/2010/02/10/use-a-common-table-expression-and-the-row_number-function-to-eliminate-duplicate-rows-3/

http://msdn.microsoft.com/en-us/library/ms186243.aspx
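For a quick taste of the recursion mentioned above, here’s a minimal sketch of a recursive CTE that generates the numbers 1 through 10: the anchor member seeds the set, and the recursive member references the CTE by its own name until the WHERE clause stops it.

```sql
;WITH Numbers (n) AS
(
    SELECT 1            -- anchor member: seeds the result set
    UNION ALL
    SELECT n + 1        -- recursive member: references the CTE itself
    FROM Numbers
    WHERE n < 10
)
SELECT n
FROM Numbers;           -- returns the numbers 1 through 10
```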

TABLE VARIABLES

Table Variables are another powerful temporary structure for managing data in SQL Server; a table variable is destroyed when it falls out of scope, which is more limited than the session lifetime of a temp table.  Although there might be some slight performance gains associated with using small table variables (they avoid some of the recompilation and statistics overhead of temp tables), the real benefit stems from their use as building blocks inside user-defined functions and table-valued parameters (both introduced below).  The syntax for building a table variable is as follows:

DECLARE @t TABLE (ID int)

A note of caution: the SQL Server optimizer assumes that table variables hold very few rows; execution plans may be negatively impacted if the amount of data being stored in a table variable grows large (estimates of the tipping point vary greatly, from 10 rows to 1000 rows).  Note also that indexes on a table variable can only be created through constraints (such as a clustered primary key) declared inline as part of the DECLARE statement.
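To illustrate that last point, here’s a brief sketch (the table and column names are just placeholders): the inline primary key below is the only way to get an index on the variable — you can’t add one after the fact.

```sql
DECLARE @Totals TABLE
(
    ID int PRIMARY KEY,     -- inline constraint creates the only index
    Amount decimal(10,2)
);

INSERT INTO @Totals (ID, Amount)
VALUES (1, 19.99), (2, 4.50);

SELECT ID, Amount
FROM @Totals;
```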

TABLE-VALUED PARAMETERS

Table-valued parameters are related to table variables; they are primarily used to pass entire result sets from one stored procedure to the next.  To be honest, I find the syntax a bit clunky, so I’m borrowing the sample code from Books Online to explain it:

USE AdventureWorks2008R2;
GO

/* Create a table type. */
CREATE TYPE LocationTableType AS TABLE
( LocationName VARCHAR(50)
, CostRate INT );
GO

/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE usp_InsertProductionLocation
    @TVP LocationTableType READONLY
AS
    SET NOCOUNT ON
    INSERT INTO [AdventureWorks2008R2].[Production].[Location]
        ([Name]
        ,[CostRate]
        ,[Availability]
        ,[ModifiedDate])
    SELECT *, 0, GETDATE()
    FROM @TVP;
GO

/* Declare a variable that references the type. */
DECLARE @LocationTVP
AS LocationTableType;

/* Add data to the table variable. */
INSERT INTO @LocationTVP (LocationName, CostRate)
    SELECT [Name], 0.00
    FROM [AdventureWorks2008R2].[Person].[StateProvince];

/* Pass the table variable data to a stored procedure. */
EXEC usp_InsertProductionLocation @LocationTVP;
GO
 

Basically, you declare a type to hold your result set, and then create a variable of that type to reference the result set; since the type is defined at the database level (and is therefore scope- and session-safe), you can move data around as a parameter (as opposed to using a global temporary table).

LOCAL TEMP TABLES

Local Temp Tables are a temporary structure; they are stored in tempdb, and look and act very similarly to base tables.  You can create indexes on them, you can ALTER their structure after creation, and you can use them exactly like a standard base table.  The primary difference between a local temp table and a standard table is that the temp table is available only to the session in which it was created, and it’s destroyed after that session is over. 

Syntactically, the CREATE statement for a local temp table is nearly identical to the CREATE statement for a base table; the only difference is that temp tables must have a single hash mark at the beginning of their name, i.e.:

CREATE TABLE #temp (ID int)

or

SELECT 1 as ID
INTO #temp
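And since temp tables look and act like base tables, you can index and alter them after creation; a quick sketch (the column and index names are illustrative):

```sql
CREATE TABLE #temp (ID int);

-- Temp tables support post-creation changes, just like base tables:
ALTER TABLE #temp ADD CreatedDate datetime DEFAULT GETDATE();
CREATE NONCLUSTERED INDEX IX_temp_ID ON #temp (ID);

DROP TABLE #temp;  -- optional; it’s destroyed anyway when the session ends
```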

SEMI-TEMPORARY STRUCTURES

GLOBAL TEMP TABLES

Global Temp Tables are a variation of the local temp table; the same basic rules apply, except that a global temp table is visible to all sessions, not just the one that created it.  It isn’t truly permanent, though: a global temp table is destroyed once the creating session ends and any other sessions referencing it finish their work, and (at the latest) when the server is restarted.

Syntactically, the CREATE statement for a global temp table is nearly identical to the CREATE statement for a base table; the only difference is that global temp tables must have two hash marks at the beginning of their name, i.e.:

CREATE TABLE ##temp (ID int)

or

SELECT 1 as ID
INTO ##temp

 

Alright; one more to go: Interfaces.