Takeaways from a Pitching Masterclass

Pitching is 95% practice and 5% inspiration. -Annette Kramer

A couple of weeks ago I attended a pitching masterclass by Annette Kramer. It’s part of a strange habit I’ve developed of watching masterclasses on YouTube to pick up ideas for my own coaching sessions. This time I decided to attend a live one. But more on that in some other blog post…

I learned things about conducting a live masterclass, including how to interact with the audience. I also picked up a few ideas that I believe would be useful for testers. Communication skills are among the core skills of testers, and I’d say pitching is a subset that could be useful to have in your toolkit.

Format

This masterclass focused on the 3-5 minute pitch format which, according to Annette, is the hardest pitch to do. This is because talking for longer is easier – you have time to expand on your points (though you may water down the content this way…). The masterclass was mostly targeted at business representatives who need to pitch to investors or potential business partners. We had a good mix of people: from a startup pitch to someone looking for partners for school software to a showroom rep. Therefore, it was great to see examples of pitches for different audiences.

Annette live-coached the pitch makers, starting from how they walked to take their spot, to how they stood and how they expressed themselves, and she re-engineered the flow of their pitch on the fly while also including the audience in giving feedback. It was inspiring to watch how she worked with people.

Selective Hearing: Lessons in Communication

Annette says that when an investor listens to a pitch, they hear “blah blah blah will I make any money on this blah blah blah when will I earn money back blah blah”. When a potential business partner listens to a pitch, they hear “blah blah blah will this make me look good blah blah will I make/lose money with this blah blah”.

How does your audience listen and what is important to them?

Do you focus more on what you want to say or what they want to hear?

During the past year I’ve dealt with C-level and other directors and managers more than I previously have, so this one hits home and is a good reminder. I know I frequently fall into the trap of thinking more about what I want to say and what I hope the effect will be, rather than listening more so I can target the message better and understand the people I work with better.

From observing a number of testers over the past year, I think there is an important takeaway/reminder here: when talking to your manager (or some other stakeholder/decision maker) about testing, don’t focus so much on your specific testing problem but on the impact of the problem. When focusing on the impact of the problem, you can think of what that manager/stakeholder sees and what they’d like to hear. I bet it won’t be some testing-specific talk about the issue you want to address or the idea you want to introduce. I bet it would help you if they heard “blah blah solving this will make us look good blah blah blah solving this will mitigate the risk of not fulfilling the financial goals that a director somewhere set for me blah blah blah”.

That being said, I really liked, and agree with, Annette’s proposition that you can’t sell or push ideas on people – it’s much more liberating to think of pitching as a way to offer people an opportunity. This is why you need to know what your audience cares about and is interested in.

I don’t do pitching on a stage to my managers, but I treat some hallway conversations or situations at meetings as micro-pitching opportunities. I have a lot of ideas and I keep looking for ways to get buy-in or traction to take them further. I don’t always know which ones will get better traction, which means I need to pitch them several times to different people. Which takes me to the next takeaway…

Process. Process Everywhere.

Annette emphasized that it’s useful to think of pitching as a step in a process, and to keep the process in mind (not the result). The goal of the pitch isn’t to close the deal because, hey, that hardly ever happens so easily (or right after a pitch).

The goal of the pitch is to get people asking questions, to keep the conversation going. When I later talked to Annette about it, she said that we do this outside work all the time. And I said, “Oh, this is why we have friends… because we keep the conversation going and this is a process”.

When trying to approach a decision maker with an idea (one that will probably cost time, which is money, as well as money itself, so there are considerations in their mind you may not know about), don’t think of it as a “make or break” situation on the first try. I find myself sometimes doing this exact thing and then getting frustrated. Well, that’s not helpful, is it? And it’s not helpful because if I focus too much on the result, I forget about the process of getting to the result. Introducing new ideas in organizations can be difficult, so focusing on the process, on starting and keeping the conversation going, is helpful.

Mean It. Clearly.

It was fascinating to watch Annette pick up the difference between when the speaker really meant and believed in what they said and when they didn’t because they were focused on what to say (or on remembering what to say…). I observed a significant difference in one person’s body language and facial expressions in the pitch after Annette had made some adjustments and asked them some questions to help them discover what they actually meant.

And I mean, if I could pick up on it, so can you. And other people will pick up on it about you. Here’s another takeaway: don’t be abstract, be specific. It will be hard for you to say it like you mean it if the concepts you use are too abstract (and it will be hard for your audience to grasp them).

Annette did a great job helping people to be specific and get the real meaning out from behind the words. I think this is also a process: you start with an idea, and through practicing a pitch for it, you peel away layers and arrive at the core that will be specific and clear.

We also addressed the issue of using jargon and how it makes attempts at being specific revert to being abstract (“What do you mean by [this thing]?”). I’ve also observed that testers tend to use testing jargon when talking to stakeholders who don’t know anything about testing (or don’t care about it…). It’s a similar point to the one above: think about the audience you have and what they can understand and want to hear (probably not jargon). Focus on a problem they can relate to. Annette pointed out why TED talks are so great: among other things, the speakers avoid jargon so that everyone can understand what they’re talking about.

And don’t talk on autopilot!

Autopilot leads to not really putting yourself into the words you’re saying, and you end up “just talking”, not delivering a message.

Annette had a great tip for this: remember why you care about what you do/say BEFORE you say it.

***

I’ll blog a bit more on the pitch structure and other takeaways in another post.


What I Learned: Coaching Testers with James Bach and Ann-Marie Charrett

Thanks to the ever-charitable Rosie Sherry, I was able to attend the course “Coaching Testers” in Brighton earlier this March. I was expecting to learn about the mechanics of coaching and to reframe and reevaluate my previous (practical but mostly intuition-based) experience. In this post I’m going to give an overview of what I learned from the first half of the training (for the sake of my readers and for the sake of me writing shorter blog posts :)).

What I liked about the course was that after a fairly brief introduction about the coach-tester relationship and the coaching space, we got to work and conducted a brief testing session as a student, then reversed the roles. Just 15+15 minutes later I had learned a couple of lessons.

Firstly, even if the coaching session has a relatively “loose goal” at the beginning, it is important for the coach to pick a trail quickly enough to avoid a situation where there are two lambs (not just one) wandering on the meadow. Of course, it is challenging to pick a suitable topic when you hardly know the person and their skills. I’m thinking that even a preliminary coaching session for mapping the student’s skills and building the relationship is a good start.

Secondly, beware of the magic and mechanics of hearing, listening to, and processing the student’s responses. For example, I picked up on a vague explanation my student gave and I wanted him to be more specific. He explained again. I wasn’t satisfied and applied some more pressure. He explained again. Since I had a fairly specific answer in mind and I didn’t hear it, then… well. Luckily, James was observing this exchange and said, “oh, but he IS more specific now!”.

I suddenly realized that while trying to process the student’s answer, I was comparing it to my preconceived answer I would have given. This made me deaf to what he was saying. I felt like I had slapped myself. At least I am now a wiser slapped version of myself.

Discussing it later, I was relieved to hear from James and Ann-Marie what I already suspected: improving the process of listening, processing, and giving feedback is a matter of mechanics and practice.

Thirdly, focus on what the student is doing and how, not on how you could do it better. It doesn’t matter. This is the conclusion Anis drew from coaching me (and I share his sentiment). Demonstration and concrete examples are in order if the student is “spinning their wheels” and it seems very difficult for them to get back on track; or demonstration may be part of establishing your credibility as a coach. But don’t rush it. It’s about the student’s skills.

These lessons nicely fit into the model of the coaching space that James and Ann-Marie introduced. In this model, the coach and the tester bring similar elements to the coaching arena: both have their context, expectations, abilities, etc., yet these may not be shared through joint experiences (though I think long-term coaching would increase the overlap to some extent). Both share some of each other’s roles: the coach learns from the student, and the student can facilitate the coach’s learning.

In that arena, there is energy (and trust) between the tester and the coach that needs to be managed. If I remember correctly, managing the energy was initially attributed to the coach, but I think through the discussions we came to an agreement that energy and trust are to be read and managed by both (if we didn’t agree, then I guess this is how I interpreted it). Yet managing the pressure belongs mostly to the coach. And then there is direction, given by the coach, which provides the method for the session.

Using this model, I could describe my lessons as follows:

  • lack of direction on the coach’s part can make the energy wither, so that the coach and the tester wander apart and maybe even leave the arena;
  • abilities and expectations can differ between the coach and the tester, but one can find the other a source for improvement (I can practice and improve a certain aspect of my listening skills);
  • giving direction should be administered with care and attention so that the coaching session doesn’t turn into a training session without anyone planning for that.

***

These are some initial thoughts in context… I’m still processing the experience, so more to come.

PEST3: Kickstart Learning by Teaching! vol1

Last weekend I had the pleasure of presenting at PEST3 (the third Estonian Context-driven Testers’ Peer Conference). The theme for this peer conference was teaching testing. In addition to the presentation, each participant had to present an exercise for teaching testing. This “added bonus” turned out to be pretty fun, for me at least.

So what was covered? Here’s the first part of my summary of the presentations.

Risko Ruus talked about how he taught management that things can be different and how he had to do his share of learning in the process. In his case, the motivation to learn more came about when he was asked to work on his idea of a test tool to be used for smoke tests. As he admitted, his motivation had been sinking because his duties at work were quite routine and tedious most of the time. So when the opportunity was there, he took it and started teaching himself about programming to create the tool. Once the proof of concept was ready, he successfully sold the idea to management. I must say he has some mad salesman skills, or the management happened to grasp the idea without much difficulty 🙂 In any case, he was able to proceed with working on his tool.

What he pointed out was that he always kept in mind that the tool would eventually be handed over to other people to use. This forced him to think about teaching the future users and keeping everything simple and modular.

Eventually, the other team who got their hands on the tool got a bit carried away. Risko explained that they wanted to immediately broaden the scope and start using the tool for things that… Risko hadn’t planned. So here’s another point for teaching with a tool: be very clear and firm about the scope of your teaching. Listen to what your students say, but don’t hatch a new plan or solution along the way. Managing expectations is also a big part of successfully teaching someone.

Risko had a fun testing exercise, namely, juggling. So what does juggling teach you? Consistency, and breaking down a complex activity. Risko also said he uses it for defocusing: it’s a good incentive to drag your butt up from the chair, move around, and give your mind a break. You’re doing something physical and your mind is concentrated on juggling. But we all know how this works: you’re dealing with something on the surface while your mind solves some issues in the background. This is how the eureka! moments are born.

Aare Nurm talked about coaching and SBTM. I was all ears because I’m interested in hearing about experiences with implementing SBTM. I’ve been toying with the idea and we occasionally apply it, but I haven’t committed to SBTM just yet.

Fortunately, a lot of what Aare was talking about had a familiar ring to it. Important points I took note of:

  • he uses the debriefing session for a different function depending on context. For novice testers, it’s a coaching session. For experienced testers, it’s a learning session where he studies their minds and their test results.
  • getting the size of the session notes right is a difficult task at times. How much is too much or too little? His criteria were that the session notes are sufficient if all of it can be covered during debriefing (if something is left out – too much; if something needs to be checked – too little) and one has to be able to tell what they tested in a couple of weeks’ time.
  • Debriefing may turn into a brainstorming session and this is OK. It is my personal experience that these sessions easily become productive discussions. I haven’t tried to hold it back because I see a lot of good coming from this. I was glad to hear that someone has a similar experience.

I still need to figure out a bunch of administrative things before employing SBTM but I got a few simple but solid ideas that can help me.

Aare’s testing exercise was a sudoku. I haven’t really solved sudokus… I just never picked them up. Aare, of course, gave us just a couple of minutes to solve it. Aaarghh… Well, I got a bunch of numbers right at least. I spent some time trying to figure out which strategies to use: I looked at the number of digits in a row/column, how many numbers were in one 3×3 box, which number would most definitely match, etc. Well, I guess if I practiced and found more strategies, I’d also become a better… sudoku solver 🙂 When it comes to teaching testing, though, sudoku may be good for recognizing patterns and learning to pick a proper strategy quickly (especially if you put yourself under a time constraint).