
#SQLPASS–Who’s Making It Rain?

 

As promised in my previous post (#SQLPASS–Good people, bad behavior…), I’d like to start diving into some of the controversies that have cropped up in the last year and critically analyze what I consider to be “bad decisions”.  This first one is complex, so let me try to sum up the players involved first (with yet another post to follow about the actual decision).  Please note that I am NOT a fan of conspiracy theories (no evil masterminds plotting to rule the SQL Server community), so I’m trying to avoid speculating too much about motive and instead focusing on observable events.

A lot of the hubbub over the last couple of weeks about the Professional Association for SQL Server wasn’t just about the election or the password controversy, but about the decision to become simply PASS in all marketing materials (gonna need a new hashtag for twitter). So much controversy, in fact, that Tom LaRock, current Board President, wrote an excellent blog post about building a bigger umbrella for Mike.  I applaud Tom for doing this; it’s a vision, and that’s a great thing to have.  However, I wanted to take this metaphor, and turn it on its side; if we need umbrellas, then who’s making it rain?  Let’s take a look at the pieces of the puzzle.

 

Community as Commodity

To figure out the rainmakers, we need to define what the value of the Professional Association for SQL Server is.  If you’re reading this post, I bet you can look in a mirror and figure it out.  It’s you.  Your passion, your excitement, your interest in connecting and learning about SQL Server is the commodity provided by the organization.  We (the community) have reached a certain maturity in our growth as a commodity; we recruit new members through our enthusiasm, and we contribute a lot of free material to the knowledge base for SQL Server.  At this point, it’s far easier to grow our ranks than it would be to start over.   

However, the question I would ask is: what do YOU get out of membership?  For most of us, it’s low-to-no cost training (most of which is provided by other community members).   The association provides a conduit to connect us.   The value to you increases when you grow. Exposure to new ideas, new topics, a deeper understanding of the technology you use; all of these are fuel for growth.  In short, as individuals, community members profit most from DEPTH of knowledge.

The more active you are in the community, the more likely you’ll be able to forage for valuable insight; how many of you are active in the Professional Association for SQL Server?   According to this tweet from the official twitter account, 11,305 people have active profiles with the organization.  While that’s not a great metric for measuring knowledge seekers, it does provide a baseline measure of the people who care enough to change their profiles when prompted.

 

Microsoft Needs A New Storm

The Professional Association for SQL Server was founded to build a community of database professionals with an interest in learning more about Microsoft SQL Server; the founding members of the organization were Microsoft and Computer Associates, who obviously saw the commodity in building a community of people excited about SQL Server.  The more knowledge about SQL Server in the wild, the more likely that sales of software licenses and training will increase.  Giving away training and knowledge for a low cost yields great dividends in the end.

This is not a bad thing at all; it’s exciting to have a vendor that gives away free stuff like training.  However, it appears that Microsoft is making a slight shift away from a focus on SQL Server.  What makes me think this?

  • It’s getting cloudy (boy, I could stretch this rain metaphor): software as a service (including SQL as a service) is a lot more profitable in the long run than software licensing.  By focusing more on cloud services (Azure), Microsoft is positioning itself as a low-to-no administration provider.  
  • Electricity (Power BI/Power Query): Microsoft is focusing pretty heavily on the presentation layer of traditional business intelligence, touting how simple it is to access and analyze data from anywhere in Excel “databases”.  Who needs SQL Server when your data is drag-and-drop?
  • The rebranding of SQL Server Parallel Data Warehouse: Data warehouse sounds like a database; Analytics Platform System sounds sexier, implying that your data structures are irrelevant.  Focus on what you want to do, not how to do it.

The challenge that Microsoft faces is that it has access to a commodity of SQL Server enthusiasts who don’t exactly fit the model of software-as-a-service; those of us who are comfortable with SQL Server on premises haven’t exactly made the leap to the cloud.  Also, many DBAs dabble in Excel; they’re not analytics practitioners.  In short, Microsoft has Joe DBA, but is looking for Mike Rosoft (see what I did there?), the Business Analyst.  Mike uses Microsoft tools to do things with data, not necessarily databases.  The problem?  Mike doesn’t have a home.   In order to maximize profits, Microsoft needs to invest in the growth of a larger and more diverse commodity.  In short, Microsoft wants a BROADER audience, but it wants that audience to be excited and passionate about its technology.

Rain Dancing With C&C

The Professional Association for SQL Server has been managed by Christianson & Company since 2007.  While the Professional Association for SQL Server Board of Directors is made up of community volunteers, C&C is a growing corporation with the traditional goal of any good for-profit company: to make money.  How does C&C make money? They grow and sell a commodity.  If the Professional Association for SQL Server grows as an organization, C&C’s management of a larger commodity increases in value.   As far as I can tell, the Professional Association for SQL Server is C&C’s only client that is managed in this way.

The community gets free/low-cost training; C&C helps manage that training while diverting the cost to other players (i.e., Microsoft and other sponsors).  If Microsoft is looking for a broader commodity, C&C will be most successful if they can serve that BROADER audience.   The Professional Association for SQL Server’s website claims to serve a membership of 100,000+; that number includes every email address that has ever been used to register for any form of training from the association, including SQLSaturdays, 24HOP, and Summit.  Bigger numbers mean increased value when trying to build a bigger umbrella.

Yet, this 100,000+ membership is rarely reflected in anything other than marketing material.  Only 11,305 of them are eligible to vote; fewer still (1,570) actually voted in the last election.  5,000 members are estimated to attend Summit 2014.  Perhaps the biggest measure of activity is the number of attendees at SQLSaturdays (18,362).  Any way you slice it, it seems to me that the number of people who are actively seeking DEEPER interactions is far smaller than the BROAD spectrum presented as members.  Furthermore, it would seem that growing beyond 100,000 members is challenging; if only 11,000 members are active in the community, and they’re the ones recruiting new members, how do you keep growing?  You reach out to a different audience.

 

Summary

I feel like it’s important to understand the commercial aspect of community building.  In short:

  • Microsoft needs to reach a broader audience by shifting focus from databases to simply data;
  • Christianson & Company will be able to grow as a company if they can help the Professional Association for SQL Server grow as a commodity;
  • The community has reached critical mass; it’s far easier to add to our community than it would be to build a new one.
  • The association has reached a large number of community members (100,000+); far fewer of them are active (11,305).

Where am I going with this?  That’s coming up in my next post.  While I don’t deny the altruism in the decision by the Board of Directors to reach out to a broader audience, I also think we (the commodity) should understand the financial benefits of building a bigger umbrella.

#SQLPASS–Good people, bad behavior…

I’ve written and rewritten this post in my mind 100 times over the last couple of weeks, and I still don’t think it’s right.  However, I feel the need to speak up on the recent controversies brewing with the Professional Association for SQL Server’s BoD.  Frankly, as I’ve read most of the comments and discussions regarding the recent controversies (over the change in name, the election communication issues, and the password issues), my mind keeps wandering back to my time on the NomCom.

In 2010, when I served on the NomCom, I was excited to contribute to the electoral process; that excitement turned to panic and self-justification when I took a stance on the defensive side of a very unpopular decision.  I’m not trying to drag up a dead horse (mixed metaphor, I know), but I started out standing in a spot that I still believe is right:

The volunteers for the Professional Association for SQL Server serve with integrity.

Our volunteers act with the best intentions, even when the outcomes of their decisions don’t sit well with the community at large.  However, we humans are prone to the fundamental attribution error: when WE make a mistake, it’s because of the situation we’re in; when somebody else makes a mistake, we tend to blame the person.  We need to move past that, and start questioning decisions while empathizing with the people making those decisions.

In my case, feeling defensive as I read comments about “the NomCom’s lack of integrity” and conspiracy theories about the BoD influencing our decision, I moved from defending good people to defending a bad decision.  This is probably the first time that I’ve publicly admitted this, but I believe that we on the NomCom made a mistake; I think that Steve Jones would probably have made a good Director.  Our intention was good, but something was flawed in our process.

However, this blog post is NOT about 2010; it’s about now.  I’ve watched as the Board of Directors continues to make bad decisions (IMO; a separate post is forthcoming about the decisions I think are bad ones), and some people have questioned their professionalism.  Others have expressed anger, while some suggest that we should put it all behind us and come together.  All of these responses are healthy as long as we separate the decisions made from the people making them, and figure out ways to make positive changes.  Good people make mistakes; good people talk about behaviors, and work to address them.

So, how do we work to address them?  The first step is admitting that there’s a problem, and it’s not the people.  Why am I convinced that it’s not the people?  Because every year we elect new people to the board, and every year there’s some fresh controversy brewing.  Changing who gets elected to the board doesn’t seem to stimulate transparency or proactive communication with the community (two of the biggest issues with the Professional Association for SQL Server’s BoD).  In short, the system is not malleable enough to be influenced by good people.

I don’t really have a way to sum this post up; I wish I did.  All I know is that I’m inspired by the people who want change, and it saddens me that change seems to be stunted regardless of who gets elected.  Something’s gotta give at some point.

*************
Addendum: you may have noticed that I use the organization’s full legal name when referring to the Professional Association for SQL Server.  Think of it as my own (admittedly petty) response to the “we’re changing the name, but keeping our focus” decision.

What I Wished I’d Known Sooner As A DBA

Mike Walsh has put together an excellent topic for discussion, and I’ve read several responses to it; all of them are great, but most of them are a little optimistic compared to my experiences.  I like to think that most of my friends and peers in the #sqlfamily are happy people, so it’s understandable that their guidance is gentle and well-intentioned.  Me? I’m happy.  I’m also pretty suspicious of other people, so without further ado, the 4 dark truths I wished I’d known sooner.

  1. Some people are out to get you.  Call it insecurity or good intentions gone bad, but some of your coworkers can’t take responsibility for their own actions and look to blame others.  If you’re moderately successful at your job, there’s probably at least one person that is jealous of your success and is looking for ways to bring you down.  Most people aren’t like this, and the trick is learning who’s a friend and who’s not.  Spend time with friends, and defend yourself against enemies.
  2. The best solution isn’t always the best solution.  Engineers love to SOLVE problems; we don’t just like to get close to an answer.  We want to beat it down.  Unfortunately, in the real world, perfect is the enemy of the good; solutions that are exhaustive and comprehensive on paper are usually time-sucks to implement. Don’t get so wed to a solution that you overlook the cost of implementation.
  3. At some point, someone will judge your work and it won’t pass the bar.  It’s really easy to pick apart bad code from a vendor (or perhaps from your enemy from point 1 above); it’s hard to make that same sort of critical judgment about your own code.  However, if you’re not making mistakes today, then you’ve got nowhere to grow tomorrow.  Take it easy on other people, point out the flaws in a constructive fashion, and hope that somebody does the same to you someday.
  4. The customer isn’t always right, but they always think they are.  I’ve had customers argue with me for days about something that I could demonstrate was 100% wrong; it doesn’t matter, and at the end of the day the relationship with them as a customer was irreconcilable because of the argument rather than the initial facts.  I’m not saying that you should capitulate to every whim of the customer; however, it’s less important to be right than it is to build a relationship.  Relationships are built on truth and giving a little.  Compromise and move on (or decide that it is better to move apart).

Managing a Technical Team: Building Better

Heard a great podcast the other day from the team at Manager Tools, entitled “THE Development Question”.  I’m sorry to say that I can’t find it on their website, but it did show up in Podcast Addict for me, so hopefully you can pick it up and give it a listen.  I’ll sum up the gist here, but it’s really intended to be a starting point for this blog post.  In essence, Manager Tools says that when a direct approaches you (the manager) with a question, one of the best responses you can offer is another question:

“What do you think we should do?”

Their point is not that management is a game of Questions Only, but that leaders want to develop others and development comes through actions; employees have lots of reasons for asking questions, but a good manager should realize that employees need to be empowered and able to take action for most situations.  If an employee is constantly waiting on approval from the manager, then the manager becomes the bottleneck.

Mulling this over for a couple of days made me realize that there’s a potential hazard for most new technical managers related to the issue of employee development: are we doing enough to make our employees better engineers than we were?  Let me walk you through my thinking:

  1. Most new technical managers were promoted to their position from within their company, usually because they were the best operator (i.e., someone who was skilled at their job as an engineer).
  2. Most new technical managers have a tough time separating themselves from their prior responsibilities, particularly if those prior responsibilities were very hands-on with a product/service/effort that’s still in use today (e.g., as a developer, John wrote most of the code for the current application; as a manager, John still finds himself supporting that code).
  3. If you were the best at what you did, that means that the people you now manage weren’t.  Actual skill level is debatable, but most of us take a lot of pride in what we do.  Pride can overemphasize our own accomplishments, while downplaying the accomplishments of others.

This is a problem for technical management, because the goal of a good manager is NOT to solve problems, but rather to increase efficiency.   Efficiency is best achieved by distribution; in other words, you as a technical manager could improve your own technical skills by 10%, but if your employees don’t grow, your team’s not really making progress.  On the other hand, if you invest in your directs’ growth and each of them improves their technical skills by 10%, it’s a bigger bang for your buck (unless you only have one employee; if that’s the case, polish your resume).

Here’s the kicker: sacrificing your technical skills while building the skills of your employees will pay off more in the long run than continuing to build your own technical knowledge alone.  You WANT your employees to be better engineers than you were, because you gain the advantage of the increased skill they bring to the table, distributed and magnified by the number of employees you have.   I’m not saying that you should completely give up your passion for technology; it’s helpful for managers to understand the challenges their employees face without necessarily being an expert (that’s a fundamental principle of Lean Thinking: “go see” management).  However, you should strive to be the least technical person on your team by encouraging the growth of the rest of your team.

So let me ask you: “What are you doing to develop your employees today?”

#SQLServer – Where does my index live?

Today, I got asked by one of my DBAs about a recently deployed database that seemed to have a lot of filegroups with only a few tables.  He wanted to verify that one of the tables was correctly partition-aligned, as well as learn where all of the indexes for these tables were stored.  After a quick search of the Internets, I was able to fashion the following script to help.  The script below will find every index on every user table in a database, and then determine whether it’s partitioned or not.  If it’s partitioned, the scheme name is returned; if not, the filegroup name.  The final column provides an XML list of filegroups (because schemes can span multiple filegroups) and file locations (because filegroups can span multiple files).

-- For every index on every user table: is it partitioned, which partition
-- scheme or filegroup does it live on, and which filegroups/files sit
-- underneath that storage?
WITH C AS (
    -- Filegroups (and their physical files) reachable through a partition scheme...
    SELECT  ps.data_space_id
         ,  f.name
         ,  d.physical_name
    FROM    sys.filegroups f
            JOIN sys.database_files d ON d.data_space_id = f.data_space_id
            JOIN sys.destination_data_spaces dds ON dds.data_space_id = f.data_space_id
            JOIN sys.partition_schemes ps ON ps.data_space_id = dds.partition_scheme_id
    UNION
    -- ...plus filegroups referenced directly by non-partitioned storage.
    SELECT  f.data_space_id
         ,  f.name
         ,  d.physical_name
    FROM    sys.filegroups f
            JOIN sys.database_files d ON d.data_space_id = f.data_space_id
)
SELECT  [ObjectName] = OBJECT_NAME(i.[object_id])
     ,  [IndexID] = i.[index_id]
     ,  [IndexName] = i.[name]
     ,  [IndexType] = i.[type_desc]
     ,  [Partitioned] = CASE WHEN ps.data_space_id IS NULL THEN 'No'
                             ELSE 'Yes'
                        END
     ,  [StorageName] = ISNULL(ps.name, f.name)
        -- XML list, because a scheme can span multiple filegroups
        -- and a filegroup can span multiple files.
     ,  [FileGroupPaths] = CAST(( SELECT name AS "FileGroup"
                                       , physical_name AS "DatabaseFile"
                                  FROM   C
                                  WHERE  i.data_space_id = c.data_space_id
                                  FOR XML PATH('')
                                ) AS XML)
FROM    [sys].[indexes] i
        LEFT JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
        LEFT JOIN sys.filegroups f ON f.data_space_id = i.data_space_id
WHERE   OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
ORDER BY [ObjectName], [IndexName]
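Since the original request was also about verifying partition alignment, here’s a minimal companion sketch (my own addition, not part of the original script) that lists the partitioning column for each partitioned index; if every index on a given table reports the same scheme and column, the table is aligned.

-- A sketch for checking partition alignment: for every partitioned index,
-- show the scheme and the partitioning key column (partition_ordinal = 1).
SELECT  [TableName]       = OBJECT_NAME(i.[object_id])
     ,  [IndexName]       = i.[name]
     ,  [SchemeName]      = ps.[name]
     ,  [PartitionColumn] = c.[name]
FROM    sys.indexes i
        JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
        JOIN sys.index_columns ic ON ic.[object_id] = i.[object_id]
                                 AND ic.index_id = i.index_id
                                 AND ic.partition_ordinal = 1
        JOIN sys.columns c ON c.[object_id] = ic.[object_id]
                          AND c.column_id = ic.column_id
WHERE   OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
ORDER BY [TableName], [IndexName]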

Hadoop for the SQL Server DBA – Initial Challenges

I’ve been intrigued by the whole concept of Big Data lately, and have started actually presenting a couple of different sessions on it (one of which was accepted for PASS Summit 2014).  Seems only right that I should actually *gasp* blog about some of the concepts in order to firm up some of my explanations.  Getting started with Hadoop can be quite daunting, especially if you’re used to relational databases (especially the commercial ones); I hope that this series of posts can help clear up some of the mystery for the administrative side of the house.  Before we dive in, I think it’s only fair to lay out some of the initial challenges with discussing Big Data in general, and Hadoop specifically.  Depending on your background, some of these may be more challenging than others.

Rapid Evolution

Welcome to the wild, wild west.  If you come from a commercial database background (like SQL Server), you’re probably accustomed to a mature product.  For Microsoft SQL Server, a new version gets released on what appears to be a 2-4 year schedule (SQL 2005 -> 2008 -> 2012 -> 2014); of course, there’s always the debate as to what constitutes a major release (2008 R2?), but in general, the core product gets shipped with new functionality, and there’s some time before additional new functionality is released.

Hadoop’s approach to the release cycle is much looser; in 2014 alone, there have been two “major” releases with new features and functionality included.  Development for the Hadoop engine is distributed, so the release and packaging of new functions may vary within the ecosystem (more on that in a bit).  For developers, this is exciting; for admins, this is scary.   Depending on how acceptable change is within your operational department, the concept of rolling out an upgraded database engine every 3-4 months may be daunting.

Ecosystems, not products

Hadoop is an open-source product, so if you’re experienced with other open-source products like Linux, you probably already understand what that means; open-source licensing means that vendors can package the core product into their product, as long as they allow open access to the final package.  This usually means that commercial providers will either bundle an open-source product with their own proprietary side-by-side software (“we interface with MySQL” or “we run on Linux”), or they release their modified version of the software in a completely open fashion and earn revenue from a support contract (e.g., Red Hat).  In either case, it’s an ecosystem, not a canned product.

Hadoop technically consists of four modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

However, take a look at the following framework from Hortonworks (the Microsoft partner for Hadoop):

(Hortonworks Data Platform stack diagram)

There’s a lot of stuff in there that’s being developed but isn’t officially Hadoop.  It could become part of the official stack at some point, or it may not.  Other vendors may adopt it, or they may not.   Each of these components has its own update schedule (again, change!), but there is some flexibility in this approach (you can upgrade individual components); it does make the road map complex compared to traditional database platforms.

Big Data doesn’t always mean Big Data.

Perhaps the hardest thing to embrace about Big Data in general (not just Hadoop) is that the nomenclature doesn’t necessarily line up with the driving factors; a Big Data approach may be the best approach for smaller data sets as well.   In essence, data can be described in terms of the 4 V’s:

  1. Volume – The amount of data held
  2. Velocity – The speed at which the data should be processed
  3. Variety – The variable sources, processing mechanisms and destinations required
  4. Value – The amount of data that is viewed as unique, non-redundant, and actionable

A distributed approach (like Hadoop) is usually appropriate when tackling more than one of these four V’s; if your data is just large, but low in velocity, variety, or value, a single installation of SQL Server (with a lot of disk space) may be appropriate.  However, if your data has a lot of variety and a lot of velocity, even if it’s small, a Big Data approach may yield considerable efficiency.  The point is that big data alone is not necessarily the impetus for using Hadoop at all.

Summary

Big Data & Hadoop are complex topics, and they’re difficult to understand if you approach them from a traditional RDBMS mentality.  However, understanding the fundamentals (that Big Data approaches are rapidly evolving, ecosystem-based rather than packaged products, and applicable to more than just volume) can lay a foundation for tackling the platforms.

 

 

Managing a Technical Team: Act Like a Good Developer

This is one of my favorite pieces of advice from my Managing a Technical Team presentation that I’ve been doing at several SQLSaturdays and other conferences: act like a good developer, with a different focus.  Most new managers, especially if they’ve been promoted from within (the Best Operator model), don’t know how to improve their management skills.  However, if you were to ask managers what makes a good developer, you’ll probably get a series of answers that fall into the following broad categories:

Good Developers have:

  • a desire to learn,
  • a desire to collaborate, and
  • a desire for efficiency.

I could probably say that this is true for all good employees, but as a former developer, I know that the culture in software development places a lot of focus on these traits; system administrators usually have different focus points.  However, all technical managers SHOULD emulate these three traits in order to be effective.  Let me explain.

Desire to Learn

Let’s imagine Stacy, a C# developer in your company; by most accounts, she’s successful at her job.  She always seems to be up on the latest technology, has great ideas, and always seems to have a new tool in her toolkit.  If you ask her how she got started programming, she’d tell you that she picked it up as hobby while in college, and then figured out how to make a career out of it.  She’s an active member of her user group, and frequently spends her weekends reading and polishing her craft; while not a workaholic, she does spend a great deal of her personal time improving her skills.  She’s on a fast track to managing a team, in part because of her desire to learn.

One day, she gets promoted, and is now managing the development team; she struggles with the corporate culture, the paperwork, laying out a vision, and can’t seem to figure out how to motivate her team to the same level of success that she was achieving as a developer.  The problem is that her desire to learn no longer syncs up with her career objectives;  Stacy needs to invest her educational energies into learning about management.

Ask a new IT manager what books they’re reading, and typically the response will be either none at all, or a book on the latest technology.  We tend to cling to what’s familiar, and if you’ve got a technical background, it’s easy and interesting to keep focusing on that background.  However, if you’re serious about being a manager, you need to commit to applying the same desire to learn that you had as an employee to learning more about management.  Sure, pick up a book on Big Data, but balance it out with a book on Relationship Development.  Podcasts?  There are management ones out there that are just as fun as the development ones.  Webinars? Boom.

Desire to Collaborate

Bob’s a data architect.  Everybody loves Bob, because he really listens to your concerns, and tries to design solutions that meet those concerns; if he’s wrong about something, he’s quick to own up to the mistake, and moves on.  He works well with others, acknowledging their contributions and adapting to them.  In short, Bob is NOT a jerk; nobody wants to work with a jerk.

Bob gets promoted to a management position, and he too struggles; he’s still hanging out with his former teammates, and is still going to the same conferences.  Everybody still likes Bob, but he’s having trouble guiding his team in an effective manner.  He hasn’t really built relationships with his new peers (other managers that report to his director), and hasn’t found ways to manage more effectively.  He’s collaborating, but with the wrong people.

As a new manager, you should continue to maintain relationships with your directs, but you need to build a relationship with your new team of peers.  Understand their visions, and find ways to make your team valuable resources to them. Reach out to other managers at user groups and conferences; build a buddy system of people based on your management path, not just your technical one.

Desire for Efficiency

If you sat down and had a conversation with any development team that was effective and producing results and asked them about their methodology, it wouldn’t be long before they started talking about frameworks.  Efficiency in development is derived from reusable patterns and approaches to problems; they’re tough to implement at first, but the long term gain is enormous.

As you’ve probably guessed, there are management frameworks that can be very effective in a technical environment; investing time in implementing them can yield great efficiencies when faced with making decisions.  In my current environment, I use three:

  1. MARS – my own self-rolled approach to system operations; it’s not perfect, but it helps focus efforts.
  2. Kanban – allows me to see what our WIP (Work In Progress) is, and helps queue up items for work
  3. ITIL – we’re just starting to adopt this, but we’re working on isolating Incident Management from root cause analysis, as well as implementing robust change control processes.

The challenge with management frameworks is similar to that of development frameworks: bloat.  It’s too easy to get bound up in process and procedures when lighter touches can be used, but in most cases, the efficiency gained by having a repeatable approach to decisions allows you to respond quickly to a changing environment.

Summary

Management is tough, but it’s especially tough if you continue to focus on your technical chops as opposed to your leadership abilities.  Act like a good developer, and apply those same basic principles to your team.

Determining the Primary Key of a table without knowing its name

So, I’ve been trying to find more technical fodder for blog posts lately in order to get in the habit of blogging on a regular basis, and I thought I would explore a few of my higher-ranked answers on StackOverflow and provide a little more detail than that site allows.  This is my highest-rated answer (sad, I know, compared to some posters on the site):

Determine a table’s primary key using TSQL

I’d like to determine the primary key of a table using TSQL (stored procedure or system table is fine). Is there such a mechanism in SQL Server (2005 or 2008)?

My answer:

This should get you started:

SELECT *
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.Constraint_name
    WHERE tc.CONSTRAINT_TYPE = 'Primary Key'

 

Besides the obvious faux pas of using SELECT * in a query, you’ll note that I used the INFORMATION_SCHEMA views; I try to use these views wherever possible because of the portability factor.  In theory, I should be able to apply this exact same query to a MySQL or Oracle database, and get similar results from the system schema.

The added benefit of this query is that it allows you to discover other things about your PRIMARY KEYs, like looking for system named keys:

SELECT  tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME, ccu.COLUMN_NAME
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.Constraint_name
WHERE   tc.CONSTRAINT_TYPE = 'Primary Key'
    AND ccu.CONSTRAINT_NAME LIKE 'PK%/_/_%' ESCAPE '/'

or finding PRIMARY KEYs which have more than one column defined:

SELECT  tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.Constraint_name
WHERE   tc.CONSTRAINT_TYPE = 'Primary Key'
GROUP BY tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME
HAVING COUNT(*) > 1
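For completeness, here’s an alternative sketch of my own (not part of the original answer) that uses the SQL Server-specific catalog views instead of INFORMATION_SCHEMA; you lose portability, but you gain the key column order for composite keys.

-- SQL Server-specific sketch using catalog views instead of INFORMATION_SCHEMA.
SELECT  [TableName]      = OBJECT_NAME(kc.parent_object_id)
     ,  [ConstraintName] = kc.[name]
     ,  [ColumnName]     = c.[name]
     ,  [KeyOrdinal]     = ic.key_ordinal
FROM    sys.key_constraints kc
        JOIN sys.index_columns ic ON ic.[object_id] = kc.parent_object_id
                                 AND ic.index_id = kc.unique_index_id
        JOIN sys.columns c ON c.[object_id] = ic.[object_id]
                          AND c.column_id = ic.column_id
WHERE   kc.[type] = 'PK'
ORDER BY [TableName], ic.key_ordinal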

Error 574 Upgrading SQL Server 2012 to SP1

This blog post is way overdue (check the dates in the error log below), but I promised our sysadmin that I would write it, so here it is.  Hopefully, it’ll help some of you with this aggravating issue.  During an upgrade of our SQL cluster, we ran into the following error as we attempted to upgrade one of the instances:

2014-04-15 23:50:14.45 spid14s     Error: 574, Severity: 16, State: 0.

2014-04-15 23:50:14.45 spid14s     CONFIG statement cannot be used inside a user transaction.

2014-04-15 23:50:14.45 spid14s     Error: 912, Severity: 21, State: 2.

2014-04-15 23:50:14.45 spid14s     Script level upgrade for database ‘master’ failed because upgrade step ‘msdb110_upgrade.sql’ encountered error 574, state 0, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

2014-04-15 23:50:14.45 spid14s     Error: 3417, Severity: 21, State: 3.

2014-04-15 23:50:14.45 spid14s     Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

Google wasn’t helpful; there are apparently lots of potential fixes for this, none of which helped.  The closest match we found was that we had some orphaned users in a few databases (not system databases), which we corrected; the upgrade still failed.  We eventually had to contact Microsoft support and work our way up to a second-level technician.  Before I reveal the fix, let me give a little more background on how we got orphaned users.

You see, shortly after we upgraded to SQL 2012 (about a year ago), we did what many companies do; we phased out a service offering.  That service offering that we phased out required several database components, including a SQL login associated with users in the database, and several maintenance jobs that were run by SQL Agent.  When we phased out the service, those jobs were disabled, but not deleted.  Our security policy tracks the last time a login was used; if a login isn’t used within 60 days, it’s disabled.  30 days after that (if no one notices), the login is deleted.  Unfortunately, our implementation of this process missed two key steps:

  1. The associated user in each database was not dropped with the login (leaving an orphan), and
  2. any job that was owned by that login was also not dropped or transferred to a sysadmin.

The latter was the key to our particular situation; the upgrade detected an orphaned job even though that job was disabled, and blocked the upgrade from going forward.  Using trace flag -T902, we were able to start the server instance and delete the disabled job.  We then restarted the server without the trace flag, and the upgrade finished successfully.
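For reference, here’s a rough sketch of the cleanup once the instance is up under trace flag 902 (the startup command assumes a default instance, and the job name is a placeholder; adjust both for your environment):

-- Start the instance with trace flag 902 to bypass the script upgrade step
-- (run from an elevated command prompt; default instance shown):
--     NET START MSSQLSERVER /T902

-- Find jobs whose owner no longer maps to a server login:
SELECT  j.[name], j.[enabled]
FROM    msdb.dbo.sysjobs j
        LEFT JOIN sys.server_principals p ON p.[sid] = j.owner_sid
WHERE   p.[sid] IS NULL;

-- Either drop the orphaned job (our case) or reassign it to sa;
-- 'PhasedOutMaintenanceJob' is a placeholder name.
EXEC msdb.dbo.sp_delete_job @job_name = N'PhasedOutMaintenanceJob';
-- EXEC msdb.dbo.sp_update_job @job_name = N'PhasedOutMaintenanceJob', @owner_login_name = N'sa';

-- Then remove the trace flag and restart so the script upgrade can complete.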

 

Resources:

Find and fix all orphaned users for all databases.

Brent Ozar Unlimited’s sp_blitz will find jobs that are owned by users other than sa.

A tiny step in the right direction #SQLPASS

Ah, summertime; time for the annual “community crisis” for the Professional Association for SQL Server.  I’ve tried to stay clear of controversies for the last couple of years, but it’s very hard to be a member of such a passionate group of professionals and not have an opinion on the latest subject du jour.   The short form of the crisis is that there are questions about how and why sessions get selected for the highly competitive Summit this year (disclaimer: I got selected to present this year).  For more details, here are a few blog posts on the subject:

The point of my post is not to rehash the issue or sway your opinion, dear reader, but rather to focus on a single tiny step in the right direction that I’ve decided to make.  One of the big issues that struck me about the whole controversy is the lack of a repeatable, objective tool for speaker evaluations.  As a presenter, I don’t always get feedback, and when I do, the feedback form varies from event to event and meeting to meeting.  Selection committees are forced to rely on my abstract-writing skills and/or my reputation as a presenter; you can obfuscate my identity on the abstract, but it’s tough to factor in reputation if you do that.

While I agree that there are questions about the process that should be asked and ultimately answered, there’s very little that I can do to make a difference in the way sessions get selected.  However, as a presenter, and a chapter leader for one of the largest chapters in the US, I can do a little something.

  1. I am personally committing to listing every presentation I make on SpeakerRate.com, and soliciting feedback on every presentation.  To quote Bleachers, “I wanna get better”.
  2. I will personally encourage every presenter at AtlantaMDF to set up a profile and evaluation at SpeakerRate for all presentations going forward.
  3. We will find ways to make feedback electronic and immediate at the upcoming Atlanta SQLSaturday so that presenters can use that information going forward.
  4. I will champion the evaluation process with my chapter members and speakers, and continue to seek out methods to improve and standardize the feedback process.

Do I have all of the right answers? No.  For example, SpeakerRate.com seems to be barely holding on to life; it has no mobile interface, and a lack of commitment from its members seems to indicate that the site is dying a slow death.  However, I haven’t found an alternative that provides a standard, uniform measure of presentation performance.

Do I think this will provide a major change to the PASS Summit selection?  Nope.  But I do think that a sea change has to start somewhere, and if enough local chapters get interested in building a culture of feedback and evaluation, that could begin to flow up to the national level.