Monday, August 13, 2018

Coaching Competencies


  1. Coaching and Facilitating
  2. Teaching and Mentoring
  3. Domain Mastery
  4. Technical Mastery
  5. Transformational Mastery
  6. Maintain Neutrality
  7. Serve Client Agenda
  8. Reduce Client Dependence
  9. Not Colluding to accommodate dysfunctions
  10. Signature presence / coaching stance
  11. Leadership

Thursday, August 09, 2018

Traditional vs. Agile Organizations

Traditional Organization

In a traditional firm the organisation is seen as a number of separate departments, each of which is either a cost centre, like IT or HR, or a profit centre like Sales. The difference is that a cost centre doesn’t contribute directly to revenue. Costs are allocated to each cost centre, and cost efficiencies come from reducing the operating costs in the cost centres and maximising the revenue from the profit centres. Common strategies for reducing costs, especially towards the end of the financial year, are limiting travel, reducing variable headcount by laying off contractors and other temporary staff, shifting work to cheaper countries or locations (“moving the work to the people”), repurposing legacy hardware and equipment, and forcing teams to use standard components to minimise “cost per use”. Common strategies for maximising revenue are offering commissions on sales, setting aggressive quarterly targets, and offering incentives and rewards for achieving specific goals. Gains come from maximising resource utilisation, i.e. keeping people busy.

Agile Organization

In an agile organisation—in the broader sense of business agility—the organisation is seen as an interconnected system where all departments are value-generating, either directly contributing to the final product or indirectly enabling the organisation to work more effectively. Value chains map the creation of value through the organisation, ending with meeting a specific customer need. Costs and revenues are allocated holistically across the whole value chain, and cost efficiencies come from optimising the flow of value. Common strategies for reducing costs are identifying and eliminating non-value adding activities, limiting the amount of work in process to reduce hidden inventory of work, and assembling multi-disciplinary teams to reduce handoffs and other delays (“moving the people to the work”). Common strategies for maximising revenue are working iteratively and incrementally in small batches to realise value sooner and improve Risk-adjusted Return on Capital, frequent testing of product ideas with customers to identify which work not to do, and enabling teams to choose the most effective tools to reduce their time to market. Gains come from optimising flow efficiency, i.e. keeping work moving.



Dan North

Dan North on Agile Transformation

"Transformation is not a transactional activity that starts and ends and has a budget, and it should not be treated as one". It is never finished, in the same way evolution or adaptation is never finished. The goal is rather to break free from the thinking that led to the incumbent system by introducing a new paradigm, and then to use this to achieve a new steady state where the organisation is responding quickly and effectively to external and internal feedback, behaving like a genuine learning organisation.

Lean Agile Procurement

https://www.lean-agile-procurement.com/mission

DAYS instead of Months
NEEDS instead of Wants
ADAPTIVE instead of Fixed
PARTNERSHIP instead of Relationship
FUN instead of Pain

Tuesday, August 07, 2018

DevOps Tools


Good visuals by John Cutler on why agile isn't working...

Changes to scrum


  1. Chicken and pigs
  2. Dev teams no longer "commit", they forecast
  3. Grooming replaced by refinement
  4. The three questions in the daily scrum are optional; the daily scrum can be conducted in various ways, as long as it stays in the context of the sprint goal.
  5. The team is responsible for the daily scrum instead of the scrum master

Interesting reads - Agile, Change Management and Rest...


  1. Coaching Agile Teams by Lyssa Adkins
  2. Narrative Coaching by David Drake
  3. Nudge - Thaler and Sunstein

In favor of autonomous, feature teams

Erin Koth

I always like posing the question, "What do you do when production is down?"
You get people who have knowledge of the entire application in a room together and empower them to fix the problem.

That's the type of team you want in development, without the extreme pressure of that sort of time crunch

Thursday, August 02, 2018

REST, SOAP and CORBA -- How we got here...



Source: http://greglturnquist.com/2016/05/03/rest-soap-corba-e-got/


Greg Turnquist

I keep running into ideas, thoughts, and decisions swirling around REST. So many things keep popping up that make me want to scream, “Just read the history, and you’ll understand it!!!”


So I thought I might pull out the good ole Wayback Machine to an early part of my career and discuss a little bit about how we all got here.


In the good ole days of CORBA

This is an ironic expression, given computer science can easily trace its roots back to WWII and Alan Turing, which is way before I was born. But let’s step back to somewhere around 1999-2000 when CORBA was all the rage. This is even more ironic, because the CORBA spec goes back to 1991. Let’s just say, this is where I come in.

First of all, do you even know what CORBA is? It is the Common Object Request Broker Architecture. To simplify, it was an RPC protocol based on the Proxy pattern. You define a language-neutral interface, and CORBA tools compile client and server code into your language of choice.

The gold in all this? Clients and servers could be completely different languages. C++ clients talking to Java servers. Ada clients talking to Python servers. Everything from the interface definition language to the wire protocol was covered. You get the idea.
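The Proxy pattern at the heart of this can be sketched in plain Python. This is an illustrative analogy, not real CORBA tooling; the names (OrderService, OrderServiceStub) and the in-process "transport" are invented for the example:

```python
# Sketch of the Proxy pattern underlying CORBA-style RPC.
# In real CORBA, the interface would be defined in IDL and the stub/servant
# generated by tools; here everything is hand-written and in-process.
from abc import ABC, abstractmethod

class OrderService(ABC):                       # the language-neutral "contract"
    @abstractmethod
    def total(self, order_id: int) -> float: ...

class OrderServiceImpl(OrderService):          # server-side implementation (servant)
    def total(self, order_id: int) -> float:
        return {523: 99.95}.get(order_id, 0.0)

class OrderServiceStub(OrderService):          # client-side proxy
    def __init__(self, transport):
        self._transport = transport            # would marshal the call over the wire

    def total(self, order_id: int) -> float:
        return self._transport("total", order_id)

# The "broker": here just a lambda dispatching to the servant.
servant = OrderServiceImpl()
stub = OrderServiceStub(lambda method, *args: getattr(servant, method)(*args))
result = stub.total(523)   # client code calls what looks like a local object
```

The point is that the client only ever sees the stub; whether the servant is C++, Java, or Ada behind the broker is invisible to it.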

Up until this point, clients and servers spoke binary protocols bound up in the language. Let us also not forget that open source wasn’t as prevalent as it is today. Hessian RPC 1.0 came out in 2004. If you’re thinking of Java RMI, too bad: CORBA preceded RMI. Two systems talking to each other were plagued by a lack of open mechanisms and tech agreements. C++ just didn’t talk to Java.

CORBA is a cooking!

With the rise of CORBA, things started cooking. I loved it! In fact, I was once known as Captain Corba at my old job, due to being really up to snuff on its ins and outs. In a rare fit of nerd nirvana, I purchased Steve Vinoski’s book Advanced CORBA Programming with C++, and had it autographed by the man himself when he came onsite for a talk.

Having written a mixture of Ada and C++ at the beginning of my career, it was super cool watching another team build a separate subsystem on a different stack. Some parts were legacy Ada code, wrapped with an Ada-Java-CORBA bridge. Fresh systems were built in Java. All systems spoke smoothly.

The cost of CORBA

This was nevertheless RPC. Talking to each other required meeting and agreeing on interfaces. Updates to interfaces required updates on both sides. The process to make updates was costly, since it involved multiple people meeting in a room and hammering out these changes.

The high specificity of these interfaces also made them brittle. Rolling out a new version required ALL clients to upgrade at once. It was an all-or-nothing proposition.

At the time, I was involved with perhaps half a dozen teams, and the actual user base was quite small. So the cost wasn’t as huge as with today’s web-scale problems.

Anybody need a little SOAP?

After moving off that project, I worked on another system that required integrating remote systems. I rubbed my hands together, ready to put my polished CORBA talents to good use again, but our chief software engineer duly informed me of a new technology being evaluated: SOAP.

“Huh?”

The thought of chucking all this CORBA talent did not excite me. A couple of factors transpired FAST that allowed SOAP to break onto the scene.


First of all, this was Microsoft’s response to the widely popular CORBA standard. Fight standards with standards, ehh? In that day and age, Microsoft fought valiantly to own any stack, end-to-end (and they aren’t today???? Wow!). It was built up around XML (another new acronym to me). At the time of its emergence, you could argue it was functionally equivalent to CORBA. Define your interface, generate client-side and server-side code, and it’s off to the races, right?


But another issue was brewing in CORBA land. The OMG, the consortium responsible for the CORBA spec, had left gaps not covered by the spec. Kind of like trying to write SQL queries with ONLY ANSI SQL. Simply not good enough. To cover these gaps, every vendor had proprietary extensions. The biggest one was Iona, an Irish company that at one time held 80% of the CORBA market share. We knew them as “I-own-ya” given their steep price.


CORBA was supposed to be cross-vendor compatible, but it wasn’t. You bought all your middleware from the same vendor. Something clicked, and LOTS of customers dropped Iona. This galvanized the rise of SOAP.


But there was a problem





SOAP took off and CORBA stumbled. To this day, we have enterprise customers avidly using Spring Web Services, our SOAP integration library. I haven’t seen a CORBA client in years. Doesn’t mean CORBA is dead. But SOAP moved into the strong position.


Yet SOAP still had the same fundamental issue: fixed, brittle interfaces that required agreement between all parties. Slight changes required upgrading everyone.


When you build interfaces designed for machines, you usually need a high degree of specification. Precise types, fields, all that. Change one tiny piece of that contract, and clients and servers are no longer talking. Things were highly brittle. But people had to chug along, so they started working around the specs any way they could.


I worked with a CORBA-based, off-the-shelf ticketing system. It had four versions of its CORBA API to talk to. A clear problem when using pure RPC (CORBA or SOAP).


Cue the rise of the web


While “rise of the web” sounds like some fancy Terminator sequel, the rampant increase in the web being the platform of choice for e-commerce, email, and so many other things caught the attention of many, including Roy Fielding.


Roy Fielding was a computer scientist who had been involved in more than a dozen RFC specs that governed how the web operated, the biggest arguably being the HTTP spec. He understood how the web worked.


The web had responded to what I like to call brute economy. If millions of e-commerce sites had been based on the paradigm of brittle RPC interfaces, the web would never have succeeded. Instead, the web was built up on lots of tiny standards: exchanging information and requests via HTTP, formatting data with media types, a strict set of operations known as the HTTP verbs, hypermedia links, and more.


But there was something else in the web that was quite different. Flexibility. By constraining the actual HTML elements and operations that were available, browsers and web servers became points of communication that didn’t require coordination when a website was updated. Moving HTML forms around on a page didn’t break consumers. Changing the text of a button didn’t break anything. If the backend moved, it was fine as long as the link in the page’s HTML button was updated.


The REST of the story


In his doctoral dissertation published in 2000, Roy Fielding attempted to take the lessons learned from building a resilient web and apply them to APIs. He dubbed this Representational State Transfer, or REST.


So far, things like CORBA, SOAP, and other RPC protocols were based on the faulty premise of defining with high precision the bits of data sent over the wire and back. Things that are highly precise are the easiest to break.


REST is based on the idea that you should send not just the data but also information on how to consume it. And by adopting some basic constraints, clients and servers can work out a lot of details through a more symbiotic set of machine + user interactions.


For example, sending a record for an order is valuable, but it’s even handier to send over related links, like the customer that ordered it, links to the catalog for each item, and links to the delivery tracking system.


Clients don’t have to use all of this extra data, but by providing enough self discovery, clients can adapt without suffering brittle updates.
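As a concrete sketch of this idea, here is what such an order representation might look like, loosely in the HAL style of hypermedia responses. The field names and URIs are invented for illustration; the point is that clients navigate by link relation instead of assembling URIs:

```python
# A hypothetical hypermedia (HAL-style) representation of an order.
# Field names and URIs are illustrative, not from any real API.
order = {
    "orderId": 523,
    "status": "SHIPPED",
    "_links": {
        "self":     {"href": "http://example.com/orders/523"},
        "customer": {"href": "http://example.com/orders/523/customer"},
        "tracking": {"href": "http://example.com/orders/523/tracking"},
    },
}

def follow(resource, rel):
    """Navigate by link relation; the client never builds a URI by hand."""
    return resource["_links"][rel]["href"]

customer_uri = follow(order, "customer")
```

If the server later moves customers to a different path, it only changes the href it hands out; a client written against the "customer" relation keeps working unchanged.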


The format of data can be dictated by media types, something that made it easy for browsers to handle HTML, image files, PDFs, etc. Browsers were coded once, long ago, to render a PDF document inline including a button to optionally save. Done and done. HTML pages are run through a different parser. Image files are similarly rendered without needing more and more upgrades to the browser. With a rich suite of standardized media types, web sites can evolve rapidly without requiring an update to the browser.


Did I mention machine + user interaction? Instead of requiring the client to consume links, it can instead display the links to the end user and let them actually click. We call this well-known technique hypermedia.


To version or not to version, that is the question!


A question I get anytime I discuss Spring Data REST or Spring HATEOAS is versioning APIs. To quote Roy Fielding, don’t do it! People don’t version websites. Instead, they add new elements, and gradually implement the means to redirect old links to new pages. A better summary can be found in this interview with Roy Fielding on InfoQ.


When working on REST APIs and hypermedia, your probing question should be, “if this was a website viewed by a browser, would I handle it the same way?” If it sounds crazy in that context, then you’re probably going down the wrong path.


Imagine a record that includes both firstName and lastName, but you want to add fullName. Don’t rip out the old fields. Simply add new ones. You might have to implement some conversions and handlers to help older clients not yet using fullName, but that is worth the cost of avoiding brittle changes to existing clients. It reduces the friction.
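That additive style of evolution can be sketched in a few lines. The function name and fields below are hypothetical, just mirroring the firstName/lastName/fullName example above:

```python
# Sketch of additive, backward-compatible API evolution: keep the old
# fields, add the new one, so existing clients keep working unchanged.
def to_representation(first_name: str, last_name: str) -> dict:
    return {
        "firstName": first_name,                   # old clients still read these
        "lastName": last_name,
        "fullName": f"{first_name} {last_name}",   # new clients prefer this
    }

rep = to_representation("Ada", "Lovelace")
```

An old client that only knows firstName and lastName never notices the new field; a new client can switch to fullName at its own pace. No version bump, no flag day.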


In the event you need to REALLY make a big change to things, a simple version number doesn’t cut it. On the web, it’s called a new website. So release a new API at a new path and move on.


People clamor HARD for getting the super secret “id” field from a data record instead of using the “self” link. HINT: If you are pasting together URIs to talk to a REST service, something is wrong. It’s either your approach to consuming the API, or the service itself isn’t giving you any/enough links to navigate it.


When you get a URI, THAT is what you put into your web page, so the user can see the control and pick it. Your code doesn’t have to click it. Links are for users.


Fighting REST


To this day, people are still fighting the concept of REST. Some have fallen in love with URIs that look like http://example.com/orders/523/lineitem/14 and http://example.com/orders/124/customer, thinking that these pretty URLs are the be-all/end-all of REST. Yet they code with RPC patterns.


In truth, URLs are formatted this way, instead of as http://example.com/orders?q=523&li=14 or http://example.com/orders?q=124&mode=customer, to take advantage of HTTP caching when possible. A Good Idea(tm), but not a core tenet.


As a side effect, handing out JSON records with {orderId: 523} has forced clients to paste together links by hand. These links, not formed by the server, are brittle and just as bad as SOAP and CORBA, violating the whole reason REST was created. Does Amazon hand you the ISBN code for a book and expect you to type it into the “Buy It Now” button? No.


Many JavaScript frameworks have arisen, some quite popular. They claim to have REST support, yet people are coming on to chat channels asking how to get the “id” for a record so they can parse or assemble a URI.


BAD DESIGN SMELL! URIs are built by the server along with application state. If you clone state in the UI, you may end up replicating functionality and hence coupling things you never intended to.









Hopefully, I’ve laid out some of the history and reasons that REST is what it is, and why learning what it’s meant to solve can help us all not reinvent the wheel of RPC.