Tasktop 2.8 released, Serena partnership announced, death to timesheets

by Mik Kersten, August 6th, 2013

Filling out time sheets is about as fulfilling as doing taxes. This mind-numbing activity is an interesting symptom of what’s broken with the way we deliver software today. Worse than the time wasted filling them out is the fact that the numbers we enter are largely fictitious: we have no hope of accurately recalling where our time went over the course of a week, given that we’re switching tasks over a dozen times an hour. As Peter Drucker stated:

Even in total darkness, most people retain their sense of space. But even with the lights on, a few hours in a sealed room render most people incapable of estimating how much time has elapsed. They are as likely to underrate grossly the time spent in the room as to overrate it grossly. If we rely on our memory, therefore, we do not know how much time has been spent. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Tracking time is not the problem. Done well, it’s a very good thing, given that time is our scarcest resource. Done right, time tracking gives us some sense of our sprint burn-downs and lets us predict what we will deliver and when. It allows us to get better at what we do by eliminating wasteful activities from our day, such as sitting and watching a VM boot up or an update install.

Effective knowledge workers, in my observation, do not start with their tasks. They start with their time. And they do not start out with planning. They start out by finding where their time actually goes. (Peter Drucker. The Essential Drucker, ch. 16. Know your Time)

Drucker was a big advocate of time tracking systems for individuals. With Agile, we have now learned how effective tracking story points and actuals can be for Scrum teams. Yet all of this goodness feels very distant when the last thing that stands between you and Friday drinks is a time sheet.

What we need is a way to combine the benefits of personal and team-level time tracking with those needed by the Project Management Office (PMO). With the Automatic Time Tracking feature of Tasktop Dev (screenshot below), we validated a way to combine personal time tracking with team estimation and planning. I still use this feature regularly to be a good student of Drucker and figure out where my own time goes, and many Scrum teams use it to remove the tedious process of manually tracking time per task.

While that automation is useful for the individual and the team, it did not help the PMO, which works at the program, enterprise and product level. PMOs use specialized project and portfolio management software such as CA Clarity PPM. So now, in our ongoing effort to create an infrastructure that connects all aspects of software delivery and to keep people coding and planning to their hearts’ content, we have stepped out of the IDE in order to bridge the divide between the PMO and Agile teams.

The Tasktop Sync 2.8 release includes updates to the leading Agile tools, such as support for RTC 4, HP ALM, CA Clarity Agile and Microsoft TFS 2012. It also ships the first Sync support for Rally and the TFS 2013 beta. The other big news is that we are now announcing a partnership with Serena in which both Tasktop Sync and Tasktop Dev will be OEM’d as part of the Serena Business Manager lifecycle suite. This new integration, which further cements Tasktop’s role as the Switzerland of ALM, will be showcased at Serena xChange in September, and ship this fall.

With Tasktop Sync 2.8, we have finally managed to connect the worlds of Agile ALM and PPM both in terms of task flow and time reporting. While the support currently works for CA Clarity only, integrating these two worlds has been a major feat in terms of understanding the data model and building out the integration architecture for connecting “below the line” and “above the line” planning (Forrester Wave). For the individual, it’s like having your own administrative assistant standing over your shoulder filling out the PPM tool for you, only less annoying and easier to edit after the fact. For the Agilistas, it’s about getting to use the methods that make your teams productive while making the PMO happy. And for the organization, it’s the key enabler for something that Drucker would have been proud of: automating the connection between strategy and execution.

Watch Tasktop webinars

The double-sided nature of requirements

by Dave West, July 26th, 2013

Bad requirements are often cited as the biggest reason why software projects fail. Badly understood or missed requirements drive business executives to despair. Business and development blame each other for why things went wrong, and ultimately the end users don’t get the system they want. Over the last 20 years, the industry has tried many different ways of solving the requirements problem. Improvements range from formal methods and model-driven approaches to team organization and customer engagement. For many Agile projects, formality is discarded and replaced with strong customer interaction and the delivery of regular working software that can be reviewed with the end user. The system specifications are replaced with stories and epics that encourage collaboration with the customer, focusing on observable acceptance criteria instead of describing in detail what the system does. So has Agile solved the requirements problem?
I would argue that Agile has in part provided a great way for teams to engage with customers, but for many complex, large systems, requirements management is still needed. In fact, for large projects or products, by discouraging specification and formality, Agile methods have often undermined project success. I believe that for complex software projects, organizations still need to invest in requirements techniques and associated tools, but they need to understand the difference between requirements management and requirements definition. This distinction gets very confusing because we talk about requirements in both project management and software delivery terms. A requirement is both a plan item and a specification, and unfortunately tool vendors have only amplified this confusion. The Agile community has all but given up on the definition side of requirements, focusing instead on the project management or work planning perspective. The truth is that requirements are both, and we need to treat them effectively in both ways.

At Tasktop, we work with this inherent duality of requirements daily when we are asked to create a real-time connection between the most traditional of requirements management tools and the most cutting-edge of Agile and developer-centric tools. For the purpose of illustration, I will describe how Tasktop builds our products and how requirements definition has risen as a formal discipline in support of our requirements management approach.

Tasktop has a simple mission: to connect the world of software delivery. That mission means that we build tools that help organizations connect their software delivery lifecycle. One of our products, called Tasktop Sync, is an integration hub for software delivery tools. Over the last four years, Tasktop Sync has evolved from a point-to-point integration to an infrastructure tool that integrates numerous software development tools and supports multiple artifacts and platforms. Tasktop engineering has always used Agile methods, with stories being allocated to sprints, releases being a set number of sprints, and story points and velocity being used to plan. But as the complexity of Tasktop Sync grew and as we engaged with more partners, we found that engineers were increasingly asking ‘what does Sync do?’, not at the macro level, but at the detail level. For example, how does Sync transform a particular field type or handle a particular error condition? This problem was made even more intense as we built more and more connectors. Those connectors implemented similar functionality for Sync but with different product end-points. We needed to describe a specification for integration. This specification describes how we expect a connector/integration to work. The implementation for each connector would be different, but the specification would be the same. Stories were a fantastic way of describing the item to be planned but gave the development and test teams very little in the way of details to ensure that each connector would implement this feature in a consistent way. Thus, we uncovered the double-sided nature of requirements. Requirements are both a specification (an asset) and a task (a work item).

But as we explored this problem more, we found that the tools we were using to capture stories did not provide us with a great way of describing the specification; even worse, once we described the specification in the tool, we found we had no way to find it again – after all, a story is used to drive development, and once completed it’s banished into history, used only for historic velocity information. We had to describe our specifications in a tool that supports version management and makes the information easy to find again – and, because we are an engineering company very focused on productivity and automation, one from which acceptance tests can be built automatically. A story is linked to this specification in the same way that code, tests and ultimately test results are linked to this artifact, but it is not the same artifact. In fact, we found that the stories described the creation of or change to that specification and held meta-data associated with the work, not the details of the capability. As a result, we found that we could capture historic velocity not only for the teams, but also for the features we were supporting.

So what does all this mean? In short, we need to think about the practice of requirements management as different from the practice of requirements definition. The two disciplines are linked but are different. Tool vendors who historically have merged these two activities need to stop, evaluate their products and try to separate the two approaches. That would give the industry tools in which managing work and creating and maintaining assets are kept distinct.

What do you think about this interesting double-sided behavior?  How have you dealt with it when deploying Agile at scale?


Betty Zakheim: “All Roads Lead to Tasktop”

by Betty.Zakheim, July 18th, 2013

Tasktop is connecting the world of software delivery – now that was an opportunity I just couldn’t refuse!

I’m really excited to have joined Tasktop. The company’s mission and products are near and dear to my heart, and I’m so pleased to be working with such a talented, dedicated and genuinely nice group of people.

It’s interesting, though, that as I think back on the roadmap of my career, it seems that all roads led me to Tasktop; it’s just that until a few weeks ago, I didn’t realize it.

Of course I was aware of Tasktop; the company is widely known for its ALM integration tools and platforms and, most recently, the introduction of the Software Lifecycle Integration framework. But for me, it was more than just awareness, it was admiration. As a former UX developer, I found the “task-focused interface” work of co-founder and CEO Dr. Mik Kersten very intellectually stimulating. As a former software engineer and engineering manager, I have to love a company that removes some of the process tedium from daily life, while making the whole team more efficient. And, as a (fairly) savvy business person, I love the idea that we’re not competing against the leaders in Application Lifecycle Management – our goal is to help everyone be more successful.

In my opinion, if you’ve ever worked on a software development team that was hindered by your colleagues working in silos, you have to admire Tasktop.

But the confluence didn’t stop at admiration.

I’ve been at this “software development” thing for, um, a while. I’ve had the pleasure of working at some truly innovative companies, and (with a tinge of modesty), being a bit of an innovator myself. So as I look at what Tasktop brings to the party, and I recount my own experiences, being called on to lead Tasktop’s marketing team feels like it was almost inevitable.

I started in technology as an engineer. I’m a bona fide, diploma-carrying computer engineer/computer scientist. My first roles were in what was then called “human factors,” but we now call User Experience. I worked on all sorts of interesting systems, from radar and air traffic control to quality management software for systems that test printed circuit boards and (in my last coding role) workflow, or what’s now known as “Business Process Management” (BPM).

Eventually, I became the VP of Marketing and Product Management at that BPM company, InConcert (a spinout of Xerox research), where we pivoted to use our workflow product as a way to integrate disparate systems. We initially concentrated on the telecommunications industry, because in 1996 deregulation led to a flurry of new telecom companies that relied on each other (and each other’s systems) to operate. The first business process we tackled was “service activation,” the process of establishing a customer’s account. At each step of the process (or task), we wrote “agents” to connect to the various systems needed to complete that step. It worked well, got terrific acceptance, and eventually TIBCO Software bought the company.

But the thing that always bothered me was how hard it was to write those agents. Back then, CORBA was the leading technology for this sort of thing.  Yes, tightly-coupled CORBA!

Fast-forward a few more years, and I had the opportunity to work at IONA (yes, the CORBA company!), when the up-and-coming technology was web services.  Going from being tightly coupled to loosely coupled was, without question, the way to go for integration technology. IONA adopted web services as an integration technology, and I had the pleasure of bringing those products to market.

When I left IONA, I left the world of integration technologies for a while. But I continued to work on products for software developers. In fact, one of my favorite jobs was with Rational Software.

Back then, Rational Software was an independent company and the leader in cross-platform software development tools. As a director, I was proud to lead all aspects of marketing for ClearCase and ClearQuest. Once IBM acquired us, I led all product, industry and solutions marketing for the entire Rational brand. While Rational itself was a substantial software company (around $800 million in revenue a year), I learned quite a bit about enterprise scale after joining IBM! Not only were our customers some of the largest in the world, but we were part of a very large company.

After IBM/Rational, I worked at a few other notable development tools companies such as Progress Software and, most recently, SmartBear Software.

In between all those jobs, I started my own consulting company, and it shouldn’t be a surprise that many of my clients catered to software development teams. I’ve had all sorts of roles: software engineer, engineering manager, consulting engineer, product manager – and now, at Tasktop, this will be my third company where I’ve been at the helm of the marketing team as VP of Marketing.

The role at Tasktop is a wonderful confluence of several things I hold near and dear: advancing the state of the art of software development and delivery, the integration of disparate systems and delivering a terrific user experience.

I couldn’t be more excited to be part of a team and a company that is so well positioned to make such a substantive difference in the software development and delivery process – and to do our part to change the world through better software.


Viva Eclipse Kepler!

by Dominika Lacka, July 15th, 2013

Say what? Another Eclipse DemoCamp at Tasktop? That’s right, and it was something to brag about! Tasktop Technologies, the Eclipse Foundation, and Pivotal hosted Eclipse Kepler DemoCamp at Tasktop HQ in downtown Vancouver for Eclipse enthusiasts, faculty and students who work with the Eclipse IDE.

The evening began with networking and snacks. We opted for delectable finger foods, with fine cheeses, les sandwiches, stuffed mushrooms and seafood, instead of the pizza and pop that have become the staple food of many a developer (not you, of course, we know you have a more sophisticated palate).

David Green and Andrew Eisenberg then kicked off the evening with event intros and warm welcomes…

…followed by an icebreaker game…

This year, we played Big Data, where each participant was asked to gather data about other attendees. We thoroughly enjoyed it, maybe because we’re all geeks.

After the game, the data was collected and collated. It is now ready to be map-reduced by Hadoop. Here are the resulting statistics of the make-up of this year’s DemoCamp participants:

                       Yes   No
Attended EclipseCon?     6   31
Committer?               8   16
Student?                10   19
Created plugin?         21    9
Raised a bug?            8    7
Contributed?            10   13
Program daily?          40    5
Java?                   21    8
IntelliJ?                0   29
NetBeans?              0.5   47
CS degree?              16    4

Note: these results reflect the skills of first-time data-collectors (i.e., the numbers may not approximate reality).

…with all the attendees warmed up and well fed, the presentations followed…

David Green (Tasktop) showed off the new super-secret tooling in Eclipse for working with GitHub.

Nieraj Singh and Kris De Volder (Pivotal) showed some of the new features of Spring Tool Suite 3.3.0, including the new Quick search feature and getting started guides.

Rafael Chaves (Abstratt) presented Cloudifier, a platform for rapid application development/deployment built on Eclipse Orion.

Deepak Azad (UBC) demoed new features in Eclipse JDT for the Kepler release, mostly new quick fixes and null type inferencing improvements.

Robin Salkeld (UBC) introduced holographic JVMs and a way to debug Java heap dumps. His technique allows you to attach a Java debugger to a heap dump and then execute queries against it using the Java debug interface.

Brendan Cleary (UVic Chisel) showed off Atlantis, a file viewer for massively huge files (60 GB+). He described how Atlantis is used by the Canadian government for analyzing trace files to look for potential security breaches.

After the last presentation, everyone headed out for drinks to celebrate another successful Eclipse DemoCamp. Cheers!


A warm thank you must go out to Andrew Eisenberg for co-authoring this blog, as well as to all the other organizers, sponsors and speakers who contributed their time and energy to making this event a success. If you’d like to be a speaker at a future Eclipse DemoCamp, please contact us. It’s a wonderful opportunity to show your stuff in a supportive circle of other Eclipse enthusiasts and have some fun while you’re at it. See you all next year!


It Takes A Village … Business Analysts: The Unsung Heroes

by Nicole Bryan, July 8th, 2013

Over the course of my career I’ve seen a lot of teams structured in a lot of different ways using a lot of different methodologies. But the ones that were successful always had one thing in common … a deep understanding that there are a variety of roles that have to be in place to deliver great software products to customers. My motto … it takes a village to deliver great products!

The Business Analyst is one of the roles that often is under-appreciated but can deliver significant benefits. I’ve always wondered why that is the case. After all, the goal is to keep your developers and testers focused on what they’re there for – coding “magical things” – but you need to make sure they code the right “magical things”! Business analysts are, in my opinion, the unsung heroes who do just that.

It takes a unique combination of skills to effectively capture the needs of your customers and creatively translate those needs into what to add to or change about your product. When a BA does this successfully, they are essentially handing the developers the proverbial “silver platter” – the developers can then “work their magic,” using their creativity and innovation to bring those needs to life.

You may be thinking … “Really? Even for back-end integration software? Shouldn’t dev teams know what to build innately?” Ironically, we see the value of business analysis even more for technically difficult and largely “under the covers” software like ours. Precisely because it is so technical and behind the scenes, we find it is even easier to get lost in the weeds and code the wrong “magical things”, or code the “magical things” incorrectly or with shortsightedness. If we deliver a great technical solution but fail to meet the use case our customers need, that great technical solution doesn’t really matter.

One part UN Ambassador, one part translator, one part designer, one part information organizer – the role itself is fabulously varied and requires a very broad skill set. If you tried to map out a day in the life of a business analyst, I’m not sure you could because the days are so different!

Here at Tasktop, we take the “it takes a village” motto very seriously, as we believe that by recognizing the different skill sets and contributions each of the different roles brings, we maintain our competitive edge – and deliver a better product to our customers. To that end, we are expanding our team of business analysts. Check out our job description – join our village and help us build the right “magical things”.


British Columbia is a great place to start and grow a company

by Gail Murphy, June 18th, 2013

Vancouver, British Columbia is well-known for many natural things: mountains and ocean, the opportunity to ski and kayak on the same day, and being home to the 3-time world champion Furious George ultimate frisbee team. Over the years, Vancouver has had a number of successful start-ups, such as Crystal Decisions, Flickr and Creo. Despite some successes, Vancouver’s start-up scene had always seemed kind of quiet… until recently, that is. So far in June, I’ve attended two events that really highlighted the entrepreneurship bug that seems to be going viral (in a good way) in Vancouver lately.

On Thursday June 6, the 2013 British Columbia Technology Impact Awards (TIA) dinner was held at the beautiful Vancouver Convention Centre. Over 1,000 people attended the event, with many new companies vying for the awards. A nice highlight of the night was a review of the companies that have won the Company of the Year award over the twenty years the awards have been given out. Last year, Tasktop was honoured to be named the “Emerging Company of the Year.” Just give us another year or two and we will be up there vying for “Company of the Year” in BC. It was fantastic to see such a large crowd gathered with such a diverse set of technologies represented (from green approaches to pest reduction to nanotechnology to crowdfunding). The energy in the room was great, and it is clear BC will continue to make its mark as a great place to nurture, grow and sustain technology companies.

Then on Friday June 7, I had the opportunity to speak at a Founder’s Friday event hosted by Women 2.0 at the Mozilla Vancouver offices. The group of people who gathered on a Friday evening was truly amazing and ranged from individuals with a first hazy idea of what might become a product, to those well underway with a start-up. I have never been in a room with so many women (and men) entrepreneurs at early stages of their companies.  The buzz was fantastic, the conversations flowed and I am sure we will be seeing many of those entrepreneurs at the BC TIAs in years to come.

All of these individuals have found out part of Tasktop’s secret: Vancouver is a place in which great talent resides, great ideas abound and great products result.

Watch out world, the BC tech industry is coming…


Why SLI Matters

by Neelan Choksi, June 11th, 2013

We’ve spent the last decade watching the shifts and trends in an industry that is eating the world. Software is increasingly becoming the basis for competitive advantage in nearly every industry. As Tasktop has evolved from an organization that focused solely on the developer to an organization that is now focused on the end-to-end aspects of delivering software from idea to plans to code to tests to operations, we’ve learned a ton about software delivery and what it takes to be successful. Our customers have shared their challenges and struggles with us. All too often, these conversations are captured on white boards, where we help customers think through how they deliver software at a technical level but not necessarily as a business process.

Software Lifecycle Integration (SLI) is based on several decades of experience and knowledge. From Mik’s days at Xerox PARC and Intentional, Gail’s academic underpinnings, Dave’s experience with the Rational Unified Process and talking to thousands of customers and ISVs as a Research Director at Forrester, Nicole’s days at Borland working with their software lifecycle tools, Betty’s time developing go-to-market strategies at SmartBear, Lance’s experiences with modern requirements technologies at Accept, and even my view from a pure business world bouncing back and forth between startups and behemoth organizations developing software, we’ve all coalesced at Tasktop because each of us, in our own way, has lived the problems that we are trying to solve with SLI and our products. On top of the intuition that those experiences provide us, the conversations with our customers and partners that we’ve been having since Tasktop started in January 2007 have been equally important. In many ways, it feels like this bootstrapped team of nearly 60 people has unknowingly been working on Software Lifecycle Integration since the beginning. So needless to say, we think SLI is going to be big, really big!

As my colleague Dave West reminds me nearly every day, software has enabled the automation of nearly every business process (e.g., supply chain, customer relationship, purchasing, logistics) with one notable exception… itself. When you think about Software Delivery as a business process, it suddenly becomes clear as to why SLI matters. Integration is the underlying basis for automation. Without integration, business process automation is nearly impossible. Once information flows between the various constituents involved in the business process, all of sudden you have the basis for a tremendous amount of business value:

- a powerful Build, Measure, Learn loop for continuous improvement
- collaboration between disparate teams while still allowing the teams to have the freedom to choose tools and processes that make them most efficient and productive
- visibility and traceability between stakeholders that are the underpinnings of business insight

So integration matters fundamentally. At a macro level, there is simply more demand for technologists than there is supply, and this gap is growing. Many people outside of technology are recognizing this; e.g., in the 2013 State of the Union, President Obama called for the country to produce a million more STEM graduates in the next decade. That’s one way of solving the problem. We believe that if you can reduce the failures and delays in software delivery, if collaboration across the silos becomes the norm rather than the exception, and if wisdom can be gleaned from the business process of software delivery, then software delivery productivity will go up dramatically. We believe the smooth flow of information between the people who need that information to do their jobs is the missing and required element for driving these outcomes.

To learn more and for a step by step methodology to help you make the business case for SLI in your organization, please see the Business Case for Software Lifecycle Integration (SLI) white paper (registration required) or contact us. We’ve got a ton of white papers, videos, and other resources to help you learn more at www.tasktop.com/SLI. Please provide feedback and help us grow the SLI community by participating.


Testing for the API Economy

by David Green, June 4th, 2013

Creating integrations is hard, but testing them is even harder. Every web service API has its own vocabulary, semantics, nuances, and bugs. Every release of a web service potentially involves breaking changes, both syntactic and behavioral. When these web services are controlled by 3rd parties, it gets even harder. As creating integrations is our business, we set out to improve how we create them: to build a scalable, reliable method of creating high-quality integrations that takes into account the continuously shifting landscape of integration end-points. Here are the hard-won best practices that we’ve developed, which you can apply to your own integrations.

The Integration Factory

We created an Integration Factory. Factory is a fitting name, since it involves a highly repeatable process building lots of things that are the same – but that raises the question: what is an Integration Factory? Though it shares some similarities with software factories, especially in the use of manufacturing techniques, it doesn’t apply the model-driven code generation techniques frequently associated with that term. Integration Factory is really just a fancy term for:

  • The Integration Specification (or the “spec”)
  • The TCK (Technology Compatibility Kit)
  • Connectors, which are implementations of the specification (these are the integrations)
  • A build and test environment, which enables testing integrations against 3rd party systems
  • Reporting, which provides an incredible level of detail on correctness and TCK conformance
  • A delivery process for evolving the spec, TCK and connectors
    • Continuous Integration
    • Code reviews
    • Build triggers

It’s a set of technologies, an approach, a methodology, a repeatable process for creating robust, high quality integrations.

Connectors

Having a common API for creating integrations is an essential step in creating a factory. Tasktop Sync uses Mylyn Tasks as the API for integrations. Mylyn Tasks is a fantastic API and framework developed originally for IDE integrations, enabling developers to bring ALM artifacts (tasks, bugs, change requests, requirements, etc.) into their IDE. At the core of this API is a common data model for ALM artifacts and an API for performing basic CRUD and search operations. We call implementations of this API “connectors”. We have lots of connectors: one each for Atlassian JIRA, Microsoft TFS, IBM RTC, IBM RRC, HP Quality Center and HP ALM, CA Clarity, etc.
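To make that shape concrete, here is a minimal sketch of what a connector-style API looks like: a common artifact model plus basic CRUD and search operations against a single repository. This is an illustration only – `Artifact`, `Connector` and `InMemoryConnector` are hypothetical names, not the actual Mylyn Tasks API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal common data model for an ALM artifact (task, bug, requirement, ...).
class Artifact {
    final String id;
    final Map<String, String> fields = new HashMap<>();
    Artifact(String id) { this.id = id; }
}

// The connector contract: basic CRUD plus search against one repository.
interface Connector {
    Artifact create(Map<String, String> fields);
    Artifact retrieve(String id);
    void update(Artifact artifact);
    void delete(String id);
    List<Artifact> search(String query);
}

// A trivial in-memory implementation, handy as a stand-in for a real
// end-point (JIRA, TFS, RTC, ...) when exercising integration logic.
class InMemoryConnector implements Connector {
    private final Map<String, Artifact> store = new HashMap<>();
    private int nextId = 1;

    public Artifact create(Map<String, String> fields) {
        Artifact a = new Artifact(String.valueOf(nextId++));
        a.fields.putAll(fields);
        store.put(a.id, a);
        return a;
    }

    public Artifact retrieve(String id) {
        return store.get(id);
    }

    public void update(Artifact artifact) {
        store.put(artifact.id, artifact);
    }

    public void delete(String id) {
        store.remove(id);
    }

    public List<Artifact> search(String query) {
        List<Artifact> matches = new ArrayList<>();
        for (Artifact a : store.values()) {
            for (String value : a.fields.values()) {
                if (value.contains(query)) {
                    matches.add(a);
                    break;
                }
            }
        }
        return matches;
    }
}
```

Because every connector presents the same interface, synchronization logic can be written once against the common contract and pointed at any end-point.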

Figure 1: a typical Mylyn connector as it relates to the API

While a common data model and API enables us to connect any end-point with any other end-point when synchronizing ALM artifacts, the API on its own is not enough.

We need to know that each connector implements the API correctly. We need to know the capabilities of the connector and of the web service, and how they’re different for each version of the web service, and if it changes as new versions of the web service are released. We need to know what works and what doesn’t and why; is it a shortcoming of the connector implementation, or a limitation of the web service? This leads us to the Connector TCK.

Connector TCK

During one of our innovation-oriented engineering Ship-It days, one of our engineers prototyped a set of generic tests that could be configured to run against any connector. Why not apply the concept of a TCK to connectors? Benjamin dubbed his creation the Connector TCK, and the name stuck. The Connector TCK would have tests that ensure that every connector is implemented correctly, and tests the capabilities of each implementation.

Connector TCK Overview

Figure 2: the Connector TCK

The kinds of tests added to the Connector TCK vary from the most basic (e.g. a connection can be established with a repository) to the more detailed (e.g. a file attachment with non-ASCII characters in its file name can be created on an artifact and retrieved correctly). The beauty of the Connector TCK is that it can be used to measure the quality and capabilities of every connector equally. It can be configured to run a connector against multiple versions of a repository; in fact, we test as many versions as we believe necessary to ensure correct behaviour for any supported version of an integration end-point.

Connector TCK - Testing Versions

Figure 3: testing a connector with multiple versions of a repository
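The idea of one generic test body running against several repository versions can be sketched like this, stripped of the actual JUnit wiring. The runner name and shape are hypothetical; the real TCK parameterizes full test suites rather than lambdas.

```java
import java.util.*;
import java.util.function.Consumer;

// Illustrative sketch: execute the same test body once per configured
// repository version, recording a pass/fail result for each version.
class VersionedTckRunner {
	private final List<String> versions;

	VersionedTckRunner(List<String> versions) { this.versions = versions; }

	// Runs testBody against every version; a thrown assertion or runtime
	// exception marks that version as failing.
	Map<String, Boolean> run(Consumer<String> testBody) {
		Map<String, Boolean> results = new LinkedHashMap<>();
		for (String version : versions) {
			try {
				testBody.accept(version);
				results.put(version, true);
			} catch (AssertionError | RuntimeException e) {
				results.put(version, false);
			}
		}
		return results;
	}
}
```

Comparing the result maps from two runs is then enough to spot behaviour that regressed between web-service versions.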

Having a Connector TCK is great – but we’ve missed the essential question: what tests should be in it? The only way to know for sure is to have a definitive contract, a specification.

A Specification

For some software engineers, requirements aren’t glamorous, exciting or even really that interesting. When we sit in front of a keyboard, the first thing we want to do is start hammering out code. This is analogous to framing a house without blueprints. Sure, it’s fun – but the house won’t be what we want in the end. The Integration Specification (the “spec”) is the blueprint that spells out the desired behaviour of integrations. The spec takes the following form:

  • User Stories (US) – stories written from the user’s perspective that define the functionality of integrations
  • Technical User Stories (TUS) – stories written from the technology perspective that map to the connector API
  • Acceptance Criteria (AC) – criteria that must be satisfied in order for a technical user story to be considered complete

Here’s an example from the spec:

  • US-2: Connector client can set up a connection to a repository
    • TUS-2.1: Connector client can establish a connection with the repository server given the URL, credentials and other necessary connection parameters
      • AC-2.1.1: Connector client can validate URL, credentials and other necessary connection parameters and receive feedback of successful connection
      • AC-2.1.2: Connector client receives meaningful feedback for invalid or missing URL
      • AC-2.1.3: Connector client receives meaningful feedback for invalid or missing credentials
      • AC-2.1.4: Connector client…
    • TUS-2.2: Connector client can…

Normal software development often involves building features to a spec (or without one) and moving on. In our case, where we’re building many integrations that essentially do the same thing, we get a lot of mileage out of the spec. The TUSs and ACs in the spec apply to every connector implementation, of which there are many. So we treat the spec with a kind of reverence that is unusual for software engineers.

Pulling It Together

The magic in this process really comes to light when we pull it together. Using JUnit and its powerful TestRule concept, we are able to connect our tests with ACs from the spec using a simple annotation:

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

@Retention(RUNTIME)
@Target(METHOD)
public @interface Validates {
	/**
	 * Provides the IDs of the acceptance criteria.
	 */
	String[] value();
}

Here’s an example of the annotation in use:

	@Test
	@Validates("2.1.2")
	public void testMissingUrl() {
		// test it
	}

With this simple technique, we can report on test results within the context of the specification. The test report takes on a whole new significance: it’s now a report on TCK compliance and connector capabilities. We can now definitively say which features are working and which are not for any integration, and easily determine differences when testing against new versions of a web service.
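A standalone sketch of the reporting side might look like the following: after a run, each acceptance-criterion ID is mapped back to the tests that validate it. This version uses plain reflection and redeclares the annotation so it is self-contained; our actual reporting hooks into JUnit via a TestRule, and the class and method names below are illustrative only.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

// Redeclared here so the sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Validates { String[] value(); }

class TckReport {
	// Maps each AC ID to the names of the test methods that validate it.
	static Map<String, List<String>> coverage(Class<?> testClass) {
		Map<String, List<String>> byCriterion = new TreeMap<>();
		for (Method m : testClass.getDeclaredMethods()) {
			Validates v = m.getAnnotation(Validates.class);
			if (v == null) continue;
			for (String ac : v.value())
				byCriterion.computeIfAbsent(ac, k -> new ArrayList<>()).add(m.getName());
		}
		return byCriterion;
	}
}

// A hypothetical TCK test class annotated against the spec.
class ConnectionTests {
	@Validates("2.1.2") public void testMissingUrl() { /* test it */ }
	@Validates("2.1.3") public void testBadCredentials() { /* test it */ }
}
```

Joining a map like this with pass/fail results is what turns an ordinary test report into a statement of TCK compliance.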

TCK Reporting

Figure 4: TCK compliance reporting

What Comes Next?

In the API economy it’s hard, but possible, to create high quality integrations. We’ve taken a look at some of the concepts behind an Integration Factory that make it a lot easier. In the next installment we’ll look at other aspects of an Integration Factory, including build and test environments and the delivery process.

Watch Tasktop webinars

Keeping Our Eyes Wide Open

by Gail Murphy, May 21st, 2013

There was more going on in San Francisco than the Bay to Breakers race this past weekend (May 18-19). The 35th International Conference on Software Engineering, the premier software engineering conference, also got underway and runs from May 18-26. ICSE, as it is known in the research community, has attracted more than 1000 people from 50 countries to the Bay Area for over a week of communicating new advancements and best practices in software engineering. Tasktop is a sponsor this year and will be participating in events such as the student-industry lunch, where 300 students will have a chance to exchange ideas with and hear about opportunities at sponsoring companies. With Tasktop’s current growth, we are eager to meet these high-caliber students!

But Tasktop has even deeper roots with ICSE. A fundamental aspect of Tasktop’s vision has always been to improve communication and collaboration amongst the people involved in software development so as to truly connect the world of software delivery. The initial step Tasktop took towards this vision was to embed the concept of a task into the IDE as part of the Eclipse Mylyn project. When Mik Kersten, our CEO, started the Mylyn project in the UBC Software Practices Lab, the need for tooling to connect the IDE to common issue repository systems quickly became evident. Luckily, a connector that allowed issues from Bugzilla to be brought into the Eclipse IDE was available within the Software Practices Lab. This connector had been built as part of the Hipikat project. Hipikat recommends items from a software project’s history, such as past bug reports, source code commits and email messages, that might be useful to a developer currently trying to perform a task on the project. In essence, Hipikat serves as a memory of the entire project, built from the project repositories, so that it can answer a question you might have asked someone at the water cooler had they not left the project. The starting point for many Hipikat queries is an issue or a bug. For instance, a developer may select a bug he or she wants to work on and ask Hipikat for similar bugs that have been solved in the past. As a result, Hipikat needed a means of having bugs in the IDE, which led to the initial development of a Bugzilla connector. Shawn Minto, who built the Bugzilla connector, is one of Tasktop’s most experienced software engineers.

On Friday of the conference, Davor Čubranić, who conceived and built Hipikat as part of his Ph.D. work at UBC, and I will receive the “Most Influential Paper 10 Years Later” Award for the paper about Hipikat. Our paper is receiving the award because it catalyzed substantial work on recommenders for software engineering, some of which is finding its way into practice today. For example, more and more sophisticated code completion recommendations are finding their way into the Eclipse Java editor.

Hipikat’s name means “eyes wide open” in the West African language Wolof. Keeping your eyes wide open is as critical today for tackling the hard problems of software engineering as it was ten years ago. Each day, Tasktop strives to keep its eyes wide open when tackling the challenges that come with connecting the world of software delivery. Will you be attending the ICSE Conference? If so, please tweet me at @gail_murphy.

Watch Tasktop webinars

Tasktop 2.7 Has Been Released

by Dave West, May 13th, 2013

On Friday, May 10th, Tasktop released version 2.7 of both Tasktop Sync and Tasktop Dev. This continues to demonstrate our commitment to putting out a major release every six months and a minor release every three months. This regular cadence helps manage scope and deliver value to our customers in a managed and controlled way. Version 2.7 was a major release with many new features, bug fixes and improvements, but I want to focus on two main themes: the first is the release of our first PPM connector, for CA Clarity PPM; the second is improvements to our IBM Rational Requirements Composer connector. Both demonstrate our continued desire to connect the world of software delivery by enabling different tools and disciplines to work from the same data and collaborate more effectively.

Support for Clarity PPM

For many developers, the world of the project office is an alien one, with its staff talking about investment portfolios, resource pools and demand management. The same can be said of the PMO when trying to understand developers who work in scrums and talk about CI and GitHub. But with the advent of faster delivery times and Agile methods, development and the PMO need to work together in more dynamic, flexible and aligned ways. That means traditional integration approaches, such as spreadsheets and email, need to be replaced with automated integration. This need led us to develop a connector for CA Clarity PPM, which enables the two teams to work together more effectively, sharing work across organizational and tool boundaries. The development of this connector also reinforces our strong partnership with CA and demonstrates our support for CA Clarity Agile and CA Clarity requirements.

Building the connector has reminded us yet again that the technical side of integrating the process and data is often the easiest part. It also reminded us that getting agreement on how the artifacts flow between these two organizations is actually much harder. As we worked on the early version of the connector with a customer, it became very clear that though at the highest level the PMO and development had shared objectives, the reality of day-to-day operation was very different for the two groups. We learned a lot about how the PMO and development can work together during this process. This learning will form the basis of a webinar titled ‘Connecting CA Clarity PPM with Development Tool Stacks from IBM, HP, MS and more’, which not only will demonstrate CA Clarity PPM integrating with the development stack, but will also describe the integration patterns that make sense and the key decisions you need to focus on when building the integration. The best practices of integration continue to drive our investment in Software Lifecycle Integration, where we hope to codify and share these ideas.

Improvements to the RRC connector

As more and more people improve their requirements processes and start adopting tools like IBM RRC, it is clear that requirements can never exist in isolation and that integration is key to delivering software effectively. Requirements tools are great at improving the discipline of requirements, but without linking them to a broader ALM tool stack, the requirements start wrong and just get worse. The key to good requirements is flow and collaboration: flow, meaning that requirements move seamlessly between management, the business, development and test; and collaboration, meaning that every stakeholder has the ability to comment, discuss and, more importantly, disagree about how and why a requirement adds business value. We at Tasktop are heavily involved in this dialogue and continue to improve our requirements connectors as we understand how this interaction plays out. For example, a key improvement in the 2.7 release is the ability to sync into folders between RRC and HP QC / ALM. For many organizations, a folder is more than a way to group large lists of requirements; it actually carries some level of business semantics. By adding this capability, we can now share context across tool boundaries. This is a great example of something we learned from our customers and partners as we enable better requirements flow and collaboration with Tasktop Sync.

Watch Tasktop webinars