
#SQLSaturday – Is it really about the tools?

There’s been some interesting conversation on the SQLSaturday Slack channel regarding the admin tools for SQLSaturday. It was spawned, in part, by this great set of ideas proposed by the Godfather of SQLSaturday, Andy Warren, about changing the way that software development for the tools is handled by PASS HQ:

“So if that is all the way on one side of the scale (super closed system), the far other side is  to open source it. Open source is also not simple. If you’re on the PASS Board you have to care about the potential loss of intellectual property. Scoff do you? No, there is no magic in the code, but it’s sweat equity and it’s a substantial part of what drives new members (and new email addresses for existing members) into the mailing list. Do you really want people forking it and spawning variations under different names?

Is there a middle ground? Sure. Let’s put together a straw man of what it might look like:

  • PASS puts the source code into a private Github repo (because all devs love git!) along with a masked data set they can load/restore
  • Write an agreement to get access to the source code and agree to not republish the code, plus convey to PASS the IP rights to new stuff
  • Write the governance process. This is the hardest piece. Who will approve pull requests? Who decides which features get added? Who will do the testing? How often will releases be done (since PASS has to coordinate that)? Code standards. Rules about fonts and logos – all the stuff you deal with any dev shop.
  • Down the road a little build a true dev environment where the latest code can be loaded and tested.”

It should be noted that Andy wrote (or oversaw) most of the original code for the SQLSaturday admin tools, so he’s no crackpot; he knows software development, he knows SQLSaturday, and he knows how to get things done. In fact, as I was writing this post, I went back and read some of my original posts about SQLSaturday #13 (back in 2009) and found myself reminiscing about all the advice he’s given to me (and countless others); when Andy proposes something, it’s usually a good idea to listen. And when Andy says he wants feedback, he means it.

So here’s my feedback, based on the events I’ve helped run (I’ve lost count; it’s somewhere around 15): I question whether PASS needs to be in the admin tools game at all. 2018 is a very different landscape than 2007 (the year of the very first SQLSaturday). Tools like Eventbrite, Meetup, PaperCall.io, and Sched.com can provide much of the support required for the daily activities of running a SQLSaturday. Most are free for smaller events, and all come without the maintenance and support costs currently required to run the admin site. To be clear, I think the current tools are fine, but there do seem to be some ongoing reliability issues.

I brought this up on the Slack channel, and Steve Jones had some great counterarguments, including issues with integration and the recurring cost of these tools. I’m not sure that integration is an issue; I think that events have four different audiences, with four different needs:

  1. PASS needs members. They want email addresses from attendees to build their membership.
  2. Sponsors need leads; email addresses are great, but interested people are better (that’s usually achieved by the raffle system).
  3. Speakers need to manage submissions, and know where they’re supposed to be.
  4. Attendees need to register, order lunch, and see the schedule.

I’m not sure that a single system that tries to do everything is needed. I can envision PASS setting up a website and an email address, and organizers then using a tool like Eventbrite to manage registrations. They can supply those email lists to PASS after the event to be imported into PASS’s databases. They can use a tool like Sched.com to manage the call for speakers and build schedules, and Eventbrite can natively build the equivalent of SpeedPASS.
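To make that hand-off concrete, here’s a minimal sketch of what the post-event email-list step could look like. The file names and column headers are assumptions for illustration, not Eventbrite’s actual export format:

```python
import csv

# Hypothetical file names and column headers -- Eventbrite's real
# attendee export layout may differ from this sketch.
INPUT_FILE = "eventbrite_attendees.csv"
OUTPUT_FILE = "pass_import.csv"

def extract_emails(input_file: str, output_file: str) -> int:
    """Read an attendee export and write a deduplicated list for PASS."""
    seen = set()
    with open(input_file, newline="", encoding="utf-8") as src, \
         open(output_file, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["first_name", "last_name", "email"])
        for row in reader:
            email = row.get("Email", "").strip().lower()
            if email and email not in seen:
                seen.add(email)
                writer.writerow([row.get("First Name", ""),
                                 row.get("Last Name", ""), email])
    return len(seen)

if __name__ == "__main__":
    count = extract_emails(INPUT_FILE, OUTPUT_FILE)
    print(f"{count} unique attendees ready for import")
```

The point isn’t the script itself; it’s that the integration between a registration tool and PASS’s membership database can be a simple flat-file hand-off rather than a custom-built system.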

Thoughts?


Presenting at #DevOpsDays #Nashville

Very excited to be presenting an Ignite talk at #DevOpsDays #Nashville (October 17-18, 2017): Tactical Advice For Strategic Change In A Brownfield

It’s only a 5-minute presentation (20 slides; the slides auto-advance every 15 seconds), but I’m stoked about it. It’s going to be my first DevOps talk that has absolutely nothing to do with SQL Server; I’m finally starting to go in a new direction and focus on Culture, Lean, and Sharing. It’s a great little conference, and I’m grateful to go back for my second year (this time as a speaker).

Now, I’ve just got to develop the slide deck 🙂

#DevOps: Remote Workers and Minimizing Silos

Been a while since I’ve posted, so I thought I’d try to put down some thoughts on a lightweight topic. I’m a full-time remote worker (and have been for the last 10 years). My company has embraced remote workers, and provides lots of tools for people to contribute from all over the country (including the wilds of North Georgia): instant messaging clients, VoIP, remote presentation software, etc. Document sharing and discussion is easy, but as you probably know, DevOps is as much about relationship building as it is about knowledge sharing. How do you minimize silos between teams when teams aren’t physically located near each other?

Here are a few different methods:

  1. First, in-person communication provides the greatest avenue for relationship-building. Bringing a remote worker in from the field from time to time can greatly reduce isolation. If your chief developer is in Wisconsin, and your main sysops guy is in Georgia, flying both in periodically is probably the best way to create opportunities for conversation. Better yet, send them both to a conference somewhere in between.
  2. If in-person conversation is the gold standard for discussion but isn’t an option for economic or practical reasons, seek methods to emulate that experience. Conference calls or web conferencing tools are common, but video conferencing adds an additional dimension to discussions. In general, the higher the bandwidth, the better because it forces “presence” in conversations.
  3. Encourage remote employees to add depth to relationships by providing them with a virtual space to connect. Internal blogs (with personal pictures or activities) or Slack channels for goofing off give team members a sense of each other beyond their work. Knowing that another person loves Die Hard as much as you do gives you a common place to start building relationships.
  4. Organize virtual non-work events, such as multi-player gaming marathons (Leeroy Jenkins would be proud; NSFW) or virtual parties.

The main point is that while face-to-face interaction is desirable, it isn’t necessary. Employees (and companies) can thrive if they actively seek methods of encouraging high-bandwidth interactions with depth. Distance increases difficulty, but it’s not insurmountable.

Feel free to drop a suggestion for enhancing remote communication and decreasing silos.

Oh, The Places You’ll Go! – #SQLSeuss #SQLPASS

Last week, I had the privilege of speaking at the annual PASS Summit; I got to present two different sessions, but the one I’m most proud of was my Lightning Talk: Oh, the Places You’ll Go! A Seussian Guide to the Data Platform. I bungled the presentation a bit (sorry to those of you who want to listen to it), but I feel pretty good about the content. I’ve presented it below, with the slides that I used for the talk.

The goal of this presentation was to explore the Microsoft Data Platform from the perspective of a SQL Server professional; I found this great conceptual diagram of the platform on this website a while back, and wanted to use it as a framework. I figured the best way to teach a subject was the same way I teach my 3-year-old: with a little bit of whimsy.

Enjoy.

You have brains in your head

And SQL Skills to boot

You’ll soar to great heights

On the Data Platform too

You’re on your own, and you know what you know,

And YOU are the one who’ll decide where to go.

You’ve mastered tables, columns and rows, OHHHHH MYYYY

You may even have dabbled in a little B.I.

You’re a data professional, full of zest,

But now you’re wondering “What comes next?”

Data! It’s more than just SQL,

And there’s a slew of it coming, measured without equal.

Zettabytes, Yottabytes, XenoBytes and more

All coming our way, faster than ever before.

So what should we do? How should we act?

Should we rest on our laurels? Should we lie on our backs?

Do we sit idly by, while the going gets tough?

No… no, we step up our game and start learning new stuff!

 

Oh, the places you’ll go!

ARCHITECTURE

Let’s start with the Theories,

The things you should know

Designing systems as services,

Is the route you might go.

Distributed, scalable

Compute on Demand

The Internet of Things

And all that it commands.

Infrastructure is base,

Platform is in line

Software and data

Rest on top of design

Once you’ve grasped this

Once you’ve settled in

You’ve embraced cloud thinking

Even while staying on-prem.

But beyond the cloud, there’s data itself.

Structured, polyschematic, binary, and log

Centralized or on the edge,

Some might say “in the fog”

Big Data, Fast Data, Dark, New and Lost

All of it needs management, all at some cost

There’s opportunity there to discover something new

But it will take somebody, somebody with skills like you.

Beyond relational, moving deep into insight

We must embrace new directions, and bring data to life

And there’s so many directions to go!

ADMINISTRATORS

For those of you who prefer administration

System engineering and server calibration

You need to acknowledge, and you probably do

You’ll manage more systems, with resources few.

Automation and scripting are the tools of the trade

Learn PowerShell to step up your game.

Take what you know about managing SQL

And apply it to more tech; you’ll be without equal

Besides the familiar disk, memory, CPU

There’s virtualization and networking too

In the future you might even manage a zoo,

Clustering elephants, and a penguin or two.

 

But it all hinges on answering things

Making servers reliable and performance tuning,

Monitoring, maintenance, backup strategies

All of these things you do with some ease.

And it doesn’t matter if the data is relational

Your strategies and skills will make you sensational

All it takes is some get up, and little bit of go

And you’re on your way, ready to know.

So start building a server, and try something new

SQL Server is free, Hadoop is too.

Tinker and learn in your spare time

Let your passions drive you and you’ll be just fine

DEVELOPERS

But maybe you’re a T-SQL kind of geek,

And it’s the languages of data that you want to speak

There’s lots of different directions for you

Too many to cover, but I’ll try a few

You could talk like a pirate

And learn to speak R

Statistics, and Science!

I’m sure you’ll go far

Additional queries for XML and JSON

Built in SQL Server, the latest edition.

You can learn HiveQL, if Big Data’s your thing

And interface with Tez, Spark, or just MapReducing

U-SQL is the language of the Azure Data Lake

A full-functioned dialect; what progress you could make!

There’s LINQ and C-Sharp, and so many more

Ways to write your code against the datastores

You could write streaming queries against StreamInsight

And answer questions against data in flight.

And lest I overlook, or lest I forget,

There’s products and processes still to mention yet.

SSIS, SSAS, In-memory design

SSRS, DataZen, and Power BI

All of these things, all of these tools

Are waiting to be used, are waiting for you.

You just start down the path, a direction you know

And soon you’ll be learning, your brain all aglow

And, oh, the places you’ll go.

And once you get there, wherever you go,

Don’t forget to write, and let somebody know.

Blog, tweet, present what you’ve mastered

And help someone else get there a little faster.

Feel free to leave a comment if you like, or follow me on Twitter: @codegumbo

#DevOps Two Books for Operations

Over the last couple years, there’s been a subtle shift in my responsibilities at my day job (and my interests in technology overall).  I’ve been doing much less database development and administration work, and more general system architecture work.  That’s harder to write up in blog posts than SQL code, so I’ve struggled with writing, but I want to get back into the habit.  So excuse the choppiness, and let me try to put some thoughts on digital paper.

I’m pushing very hard for my company to adopt DevOps principles.  There’s a lot of material out there about DevOps from the developer perspective, but there are few resources for those of us on the operations side of the house.  In a pure sense, there’s no such thing as sides, but in a regulated industry like healthcare or financial services, old walls are tough to break down, so they’re useful as organizational frameworks for general responsibilities.  However, we are all developers, whether we sling code or manage infrastructure as code; the goal is to produce repeatable patterns and tools that allow growth and change.

Two great books that I’m reading right now are:

The Practice of Cloud System Administration by Limoncelli, Chalup, and Hogan.  Tons of practical advice for building large-scale distributed processing systems, with DevOps philosophy woven throughout (and specifically highlighted in Chapter 8).  This is one of those books where you’ll feel like diving into some sections and skimming over others; it’s a thorough examination of system administration from development through implementation, so there are lots of conceptual hooks to grab hold of (and, conversely, things that you may not have experienced).

The second book that I’ve recently started reading is Site Reliability Engineering: How Google Runs Production Systems.  This book is a collection of essays that explore Google’s method of approaching reliability; like most things Google, Site Reliability Engineering is similar to DevOps, but specific to the ways that Google does things.  It’s also light on documentation (insert joke about Google and beta products here).  However, it does offer several insights into day-to-day system administration at Google.  While the SRE model is not exactly like DevOps, there’s lots of overlap, and the differences may be attributed more to practice than to concepts.

More to come.

 

Where’s your slack?

I’ve been rereading the book Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency recently.  As I alluded to in my last post, my life has been rough for the last few months.  My nephew’s passing took the wind out of an already saggy sail; I’ve spent a great deal of time just trying to balance work, family, and life in general.  Some people turn to counselors; I turn to project management books.

The premise of the book is that change requires free time, and that free time (slack) is the natural enemy of efficiency.  This is a good thing to know: if you are 100% efficient, you have no room to effect change.  Zero change means zero growth.  I’ve been a proponent of slack for a while (less successfully than I’d like); it makes sense to allow people some down time to grow.  Just to be clear, slack isn’t wasted time; it’s an investment in growth.  Slack tasks include:

  • Research into interesting projects.  Lab work allows you to experience the unexpected, which gives you time to prepare for the unexpected in production.
  • Building relationships. Teams are built on trust, and trust is earned through building relationships.  Teams that like each other are more likely to be successful when it comes to problem solving.
  • Shadow training.  Allow team members to work in other teams for a while, so they learn how the rest of the company operates.

In short, slack is necessary to promote growth; if you want your organization to stay ahead of its competition, cutting resources in the name of efficiency is a sure-fire plan for losing.  The best advice for slack time is the 80/20 rule: run your team at 80% capacity, and leave 20% for slack.  In an emergency, slack time can be temporarily sacrificed, but it’s the responsibility of management to return to normal work levels as soon as possible.

So what does this mean for me personally?  In the name of efficiency, I let slack time go.  I work a full-time job and a couple of different consulting gigs, act as a chapter leader for AtlantaMDF, and am an active father.  I have no hobbies, and suck at exercise.  I love to travel, but trips are planning exercises in and of themselves.  In short, I have zero slack to deal with emergencies.  When something goes wrong and time gets compromised, I immediately feel guilty because I’ve robbed Peter to pay Paul in terms of time.  That’s not living.

I’m done with that.

Change is incremental, so I’m not planning on upsetting the apple cart just yet, but I am trying to figure out ways to make my slack time more of a recharge time.  Don’t get me wrong; I waste time.  I sit and stare at Facebook like the rest of the modern world; I binge on Netflix when new series drop.  That’s not slack, and it doesn’t recharge me. Slack is using free time to grow, to change.  My goal is to find an hour a week for growth-promoting free time.  I’ll let you know how I’m doing.

 

#BigData is coming; what should SQL Server people do about it?

I’ve been presenting a lot on Big Data (specifically Hadoop) from the perspective of a SQL Server DBA, and I’ve made a couple of recent observations.  I think most people are aware that data generation is growing at a staggering rate, with some estimates as high as 44 zettabytes by the year 2020; what I think is lacking in the SQL Server community is a rapid movement among database professionals to expand their skills to highly scalable Big Data platforms (like Hadoop) or streaming technologies.  Don’t get me wrong; I think there are people out there who have made the transition (like Michelle Ufford: SQLFool, now Hadoopsie), and who are willing to share their knowledge, but by and large, I think most SQL Server professionals are accustomed to working with our precious relational system.

Why is that?  I think it boils down to three reasons:

  1. The SQL Server platform is a complex product, with ever increasing opportunities to learn something new.  SQL 2016 is about to drop, and it’s a BIG release; I expect most SQL Server people to wrap themselves up in new features and learn something new soon.  There’s always going to be a need for deep expertise, and as the product continues to mature and grow, it requires deeper knowledge.
  2. Big Data tools are vast, untamed, and very organic.  Those of us accustomed to the Microsoft development cycle are used to having a single official product drop every couple of years; Big Data tools (like Hadoop) are open-source, prone to various forks, and very rapidly developed.  It’s like drinking from a firehose.
  3. It’s not quite clear how it all fits together.  We know that Microsoft has presented some interesting data technologies as of late, but it’s not quite clear how the pieces all work together; should SQL Server pros learn Azure, HDInsight, Hadoop?  What’s this about U-SQL?  StreamInsight, Spark, Cortana Analytics?

The first two reasons aren’t easily addressed; they require a willingness to learn and a commitment to study (both of which are hard to come by).  The third issue, however, can be easily addressed by the following graphic.

This is Microsoft’s generic vision of a complete end-to-end analytics platform; for the data professional, it’s a roadmap of skills to learn.  Note that relational engines (and their BI cousins) remain a part of the vision, but they’re only small pieces in an ever-increasing ecosystem of database tools.

So here’s the question for you; what should SQL Server people do about it?  Do we continue to focus on a very specific tool set, or do we push ourselves (and each other) to learn more about the broader opportunities?  Either choice is equally valid, but even if you choose to become an expert on a single platform in lieu of transitioning to something new, you should understand how other tools interact with the relational system.

What are you going to learn today?

My good Karma (Go) for the day

A couple of weeks ago, I briefly mentioned the Karma Go, and how I intended to use it to stay connected at the 2015 PASS Summit in Seattle.  The good news is that the little fella worked great inside the convention center; the better news was that the convention center had upgraded its WiFi capabilities, so my Go was unnecessary.  However, something else has recently happened that may be helpful to people closer to home: Karma has recently announced that a monthly unlimited plan is now available for the Go.  Speed’s capped at 5Mbps, which isn’t super-fast – unless you live in Jackson County, Georgia.

You see, here in Jackson County, we have only one option for broadband: Windstream DSL. When it works (as it has for me), it’s fine; I have the 12Mbps package, which is OK for working from home. If I have an outage, I fall back to my Go and I’m covered (that’s happened twice since I got the device in August). However, a lot of people in Jackson County pay for a service that is unreliable (to put it nicely); quotes pulled from a Facebook group dedicated to Windstream issues in the county:

“Really tired of the constant outages. I’ll reset and the Internet comes back in for 2 minutes then goes right back out. It’s just awful!”

Or sometimes it works, but at speeds far less than what you’d expect:

“Service today at the office [in] Pendergrass…. [(0.57 MB down/.42 MB up)] makes life interesting trying to get business done… Thank God my Ipad is ATT.”

Or sometimes they don’t show up at all:

“I moved. Windstream said they’d be here today. I called at 4:30 to ask when since they haven’t called. The service rep said we were next in que and it is now 7:30. No show. How typical. Lies and the runaround. Windstream is beyond incompetent.”

Windstream was hit with a $600,000 fine by the Georgia Governor’s Office of Consumer Protection, and while the service has improved for some, many of my neighbors are still being overcharged for a service they’re not receiving. Some folks in the group had read about the new unlimited plan for Karma Go and wondered if it could be used to replace Windstream. The blog announcement for Neverstop includes this caveat:

“Can I ditch my home internet provider?

Neverstop isn’t meant to be a replacement for your home internet (yet). It’s a way to have internet anywhere, anytime. Speeds on Neverstop aren’t fast enough to be comparable to a wired home internet connection. It’s not practical for most people to use Neverstop as their only internet connection, but if you’re a light user and looking to cut down on costs, you’re welcome to give it a try. Lots of us at the Karma office are already using it as a replacement for a phone plan. Who needs minutes and texts these days?”

I thought I would help people decide for themselves; I asked the Facebook group for general locations in Jackson County where people were interested, and I headed out.

Here’s my lessons learned:

Jackson County, Georgia is bigger than I thought.

I had originally allocated an hour for my trip, but it took nearly 2 (and the starting point above is not near my house). The area is semi-rural; there’s lots of little suburban areas and towns surrounded by farms and ranches. I-85 runs through it, so cell coverage (even Sprint) is pretty good. I streamed music over the Go the entire trip, and had no hiccups; everywhere I stopped (and whenever I glanced at it), I had at least 2 bars of signal, and usually 3. In hindsight, I probably should have streamed video as a test, but overall, I think it was a fair representation of coverage.

Speed tests were good (mostly).

Below is a chart of my findings, complete with download and upload speeds. I used two different speed tests and got comparable results from both, so I’m listing the slower result for each spot check.

Location                                    Download (Mbps)   Upload (Mbps)
Durham Dr, Hoschton                                    5.36            2.50
Oak Grove Church, Jackson Trail, Hoschton              4.13            4.22
Jefferson Memorial Stadium, Jefferson                  4.93            2.40
Real Deals, Jefferson                                  0.95            1.72
Brockton Loop (Far Side), Nicholson(?)                 5.39           10.37
Thyatria Brockton Road, Jefferson                      4.00            6.12
QuikTrip, Pendergrass                                  2.99            5.14
Diamond Hill Church Road, Maysville                    2.46            2.87
Sims Bridge, Commerce (Banks County)                   5.61            3.66

 

As you can see, the Karma Go on the Sprint LTE network provided broadband speeds throughout my journey, but they weren’t always high speeds. The finding near the Real Deals store concerned me, but it may have been related to where I was parked. Overall, it may be worth investigating as a replacement for home internet service within certain contexts (explained below).
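If you want to run similar spot checks in your own neighborhood, here’s a rough sketch of how you could script it. It assumes the third-party speedtest-cli package (pip install speedtest-cli), which isn’t the tool I used on the road, so treat it as an illustration:

```python
import speedtest  # third-party package: pip install speedtest-cli

def spot_check(location: str) -> None:
    """Measure download/upload once and report the results in Mbps."""
    st = speedtest.Speedtest()
    st.get_best_server()                # picks the lowest-latency test server
    down = st.download() / 1_000_000    # results come back in bits/sec
    up = st.upload() / 1_000_000
    print(f"{location}: {down:.2f} Mbps down / {up:.2f} Mbps up")

if __name__ == "__main__":
    # Run it twice per location and keep the slower result, as I did above.
    spot_check("Durham Dr, Hoschton")
```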

Possible Pros for Karma Go – Neverstop plan

If you’re a Jackson County resident and looking to escape from the Windstream experience, the Neverstop plan has some benefits that may make it worth considering.

  1. They offer an unlimited internet connection for $50/month, capped at 5Mbps down/up (I think it’s 5Mbps both ways).
  2. The plan allows for three devices to be connected to the Internet at a given time; if a fourth device connects using the plan, one of the others will be disabled until the count drops back to three.
  3. They offer a 45-day return period for the device, which retails at $149. I don’t think that refund covers any purchased data, but if you feel like the device CAN’T replace your home internet service but will work as a reasonable backup, they let you swap plans at any time.
  4. They also have a referral program; here’s the link to my code: https://yourkarma.com/invite/stuart4873. If you decide to purchase the device, you get $10 off, and I get a $10 credit.

Possible Cons for Karma Go – Neverstop plan

Here’s the possible problems that I see with using Karma Go as a replacement for your home broadband service:

  1. LTE technology is based on cell service, which is usually less stable than wired connections. Given the instability of Windstream, however, I don’t know if that’s true or not. My single sample today may not accurately reflect coverage at your home; that’s why I’d recommend trying it for 30 days before dropping your old service.
  2. A hotspot is NOT a router. This device allows you to connect a computer or other device over WiFi to the Internet, but not to each other. In other words, your devices won’t see files on each other directly; if you’ve got advanced networking needs or a lot of devices, this may not be a great fit.
    1. A perk of this is that each device that’s connected gets its own 5Mbps pipeline; that pipe is not shared among your devices.
    2. You may be able to purchase a router that allows you to create an internal WiFi network for your devices, and then connect over another WiFi channel to the hotspot (like this one). You’d get connectivity between your devices (and get around the three-connected-devices rule on your Karma Go), but they’d all split the 5Mbps pipeline.
  3. The company itself is young. Karma’s only been around for a few years, so there’s no telling how reliable they are. They’ve had some shipping problems and communication problems of their own, but that’s part of being a startup.

Final Thoughts: Should You Buy It?

If you’re unable to get reliable service from your provider, and you have good coverage from Sprint, and you have a limited number of network devices, it’s probably worth your time to investigate the Go as a replacement. Again, a 45 day return period gives you a lot of time to put it through its paces. I love it as a backup option (the Refuel plan), but it sounds like many of you should give it a whirl as a replacement.

#SQLPass #Summit15 Gear: My KarmaGo

Just a quick post as I’m packing up for the Professional Association for SQL Server Summit 2015; this year, I’m carrying a mobile hotspot with me: my Karma Go.  Just to state the obvious: yes, I am aware that the Washington State Convention Center has free WiFi. I also know that WiFi gets horribly overloaded in certain areas (like the keynote rooms) when thousands of device-carrying database people begin tweeting at once.  I also know that most people have hotspots on their phones, but I don’t (corporate phone; unlimited data, no sharing).

So why the Go?  Besides the fact that I want to stay connected with more than just my phone, I also like the fact that it offers a reward for sharing.  You see, Karma’s data plan consists of two parts:

  1. Pay-as-you-go data.  I filled up with a bunch of data (mostly to use when my home internet goes down), and I refill when I run out.  No monthly subscription, so I’m not paying every month for Internet I don’t use.
  2. Sharing earns data.  If a new user connects to my hotspot, they get 100MB of free data, and I get 100MB of data added to my account.  Easy-peasy (my SSID is “Free Karma By @codegumbo”).

Checking the coverage maps (Karma runs on Sprint LTE), it looks like I’ll have great coverage.   Let’s see if I can stay connected through the keynote this year 🙂

Agile Perspectives: Fixed or Variable Length Iterations

Got into a deep discussion with a colleague tonight over different approaches to Agile development (yes, I’m a geek; why do you ask?).  I’m a Fixed Length Iteration kind of guy, particularly since I’ve spent most of my time on the Operational side of the house recently.  He’s a Variable Length Iteration fan, and I thought the arguments on both sides of the fence were compelling enough that I wanted to blog about them.

Really, I’m doing this for my adoring fans.

Actually, I’m doing this because I feel guilty about not blogging, and thought this was at least SOMETHING to warrant paying for a domain year after year.

Anyway, here’s the breakdown of the two arguments; let me start by spelling out some common assumptions.  I’m assuming that you’re reading this because you have some interest in development, and more specifically, some interest in agile development.  If you have those interests, then I’m also assuming that you know what an iteration is.  If not, the hyperlinks in the previous statements should help educate you and steer you away from this blog to more fascinating subjects.

The Argument for a Fixed Length Iteration

There are a couple of different versions of the Fixed Length Iteration, based on either a calendar (January, February, etc.) or a cycle (every four weeks); the goal is to commit to a ship date, and stick to it time after time.  I’m more of a fan of a calendar-based Fixed Length Iteration; the actual length of the iteration varies (February is short), but development consistently wraps up every month, and every month code is shipped out the door (see the sketch after the list below).  My reasons for supporting this model are as follows:

  1. People are cyclical creatures. We live from paycheck to paycheck; we hate Mondays and work for the weekend.  Having a fixed length cycle (with some minor variation if you based that cycle on a calendar) helps in estimating and planning what gets done and when.
  2. A fixed cycle forces developers to think in terms of components. If you’re expected to ship working code at the end of each month, you start writing code that is bite-size, so that you can leave out pieces that aren’t finished (rather than waiting until the whole thing is done).  Bite-sized code is easier to test and deploy with reduced risk.
  3. A fixed cycle means that it’s easy to change directions without stopping development efforts. Sometimes business priorities change; a fixed length cycle allows developers to continue moving forward until the end of the iteration and change gears there.  The start of a new iteration is never far away, so most businesses can wait until the next month rather than waiting till an unknown end of a sprint.
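Here’s a toy sketch of what that calendar-based cadence looks like; the “ship on the last weekday of the month” rule is my own assumption for illustration, not a prescription:

```python
import calendar
from datetime import date

def ship_date(year: int, month: int) -> date:
    """Last weekday of the month: a hypothetical monthly ship date."""
    last_day = calendar.monthrange(year, month)[1]  # days in the month
    d = date(year, month, last_day)
    while d.weekday() >= 5:                         # roll Sat/Sun back to Friday
        d = d.replace(day=d.day - 1)
    return d

# Iteration lengths vary (February is short), but the cadence never does:
# code goes out the door at the end of every month.
for month in range(1, 13):
    print(ship_date(2018, month).isoformat())
```

Everyone on the team can see the next ship date without asking; that predictability is the whole point.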

The Argument for a Variable Length Iteration

I’m trying to give this perspective a fair shake, even though I don’t subscribe to it; a variable length iteration model allows business and development to scope out work at the beginning of an iteration and estimate a ship date regardless of the cycle or the calendar; the goal is to allow code to mature and be more stable.  My friend subscribes to this model because:

  1. Variable length iterations resemble traditional projects. There’s a scope estimate, requirements gathering, and work estimates (and traditional slippage).  Most agile purists immediately scream “WATERFALL”, and there’s some truth to that, but a variable length iteration is comfortable to business.  It’s like Mr. Rogers’ house shoes.
  2. They lend themselves to continual shipment of code. If the iterations are variable, developers can focus on one task from start to end, and begin to operate on separate iterative cycles; if you have a development staff of 5, you could theoretically have up to 5 separate iterations going on at the same time; when a developer finishes, they ship their contribution without waiting on the sprint to end.
  3. This fragmentation of the iteration allows for sudden, non-disruptive change. If there are multiple iterations occurring for each independent block of code, and business needs change for one line, then only that line has to shift gears.  There’s no impetus to wait; you stop where you are on the one piece, and move on to the next piece.

Your Choice?

I’d love feedback on this; as I’ve stated from the outset, I’m a fixed length iteration guy.  What did I miss?  Are there other benefits to the Variable Length model that I should consider?