335: Start Messy
The Bike Shed - A podcast by thoughtbot
Steph has a question for Chris: When you have no idea how you're going to implement a feature, how do you write your first test? Chris has thoughts about hybrid teams (remote/in-person) and masked inputs.

This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.

Links:
Preemptive Pluralization is (Probably) Not Evil
iMask
Mitch Hedberg - Escalator Joke

This episode is also brought to you by Studio 3T. Try Studio 3T's full suite of features for 30 days, no payment details needed.

Become a Sponsor of The Bike Shed!

Transcript:

STEPH: I am recording in a new room because we're in Pennsylvania, and so I'm recording at this little vanity desk, which is something. [laughs] But there's a mirror right in front of me, so I feel very vain because it's just like, [laughs] I'm just looking at myself while I'm recording with you. It's something. CHRIS: [laughs] That is something. STEPH: [laughs] So, you know. CHRIS: Fun times. STEPH: Pro podcast tip, you know, just stare at yourself while you chat, while you record. CHRIS: I mean, if that works for you, you know, plenty of people in the gym have the mirrors up, so podcasting is like exercising in a way, and I think it makes sense. STEPH: I appreciate the generosity. [laughs] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I have a funny/emotional story that [laughs] I'm going to share with you first because I feel like it kind of encapsulates how life is going at the moment. So we've officially moved from South Carolina to North Carolina. I feel like I've been talking about that for several episodes now. But this is it: we have finally vacated all of our stuff out of the South Carolina house and relocated to North Carolina. And once we got to North Carolina, we immediately had to then leave town for a couple of days. And normally, Utah, our dog, stays with an individual in South Carolina, someone that we found, trust, and love. And he has a great time, and I just know he's happy. But we didn't have that this time. So I had to find just a boarding facility that had really high reviews that I felt like I could trust him with. I didn't even have time to take him for a day to test it out. It was one of those like, I got to show up and just drop him off and hope this goes well, so I did. And everything looks wonderful. Like, the facility is very clean. I had a list of things to look for to make sure it was a good place. But it's the first time leaving him somewhere where he's going to spend significant time in a kennel that has indoor-outdoor access. And as I walked away from him, I started to cry. And I just thought, oh no, this is embarrassing. I'm that dog mom who's going to start crying in this boarding facility as she's leaving her dog for the first time. So I put on my shades, and I managed to make it through the checkout process. But then I went to my truck and just sat there and cried for 15 minutes and called my husband and was like, "I'm doing the right thing, right? Like, tell me this is okay because I'm having a moment." And I finally got through that moment. But then I even called you because you and I were scheduled to chat.
And I was like, I am not in a place that I can chat right now. I think I told you when you answered the phone. I was like, "Everything is fine, but I sound like the world's ending, or I sound like a mess." [laughs] And yeah, so I had like two hours of where I just couldn't stop crying. I partially blame pregnancy hormones. I'm going to go with that as my escape rope for now. So I feel like that's been life lately. Life's been a little overwhelming, and that felt like the cherry on top. And that was the moment that I broke. Update: he's doing great. I've gotten pictures of Utah. He's having a wonderful time at camp, it seems. [laughs] It was just me, his mom, who is having trouble. CHRIS: Well, you know, reasonable to worry, and life's dialed up to 11 and all of that. But yeah, I will say even though you lead the conversation with everything's fine, your tone of voice did not imply that everything was fine. So when I eventually came to understand what we were talking about, I hope I was kind in the moment. But I was like, oh, okay, this is fine. We're fine. I'm so sorry you're feeling terrible right now. STEPH: [laughs] CHRIS: But okay, we're fine. For me, there was a palpable moment of like, okay, my stress is now back down a little bit. But I'm glad that things are going well and that Utah is having a fun vacation. STEPH: Yep, he seems to be doing fine. I've calmed down. You know, as you said, life's been dialed up lately. On a less emotional note and something that's a little bit more technical, I had a really great conversation with another thoughtboter where we were talking about testing. And the idea of when you learn testing, it's often very focused on like, you have this object, and it has a method. And so, you're going to write a unit test for this particular method. And it's very isolated, very specific as to the thing that you're looking to test. Versus in reality, when you pick up tickets, you don't have that scope, and like, it is so broad. You have to figure out what feature you're implementing, figure out how to test it. And it feels like this mismatch between how a lot of people learn to test and learn TDD versus then how we actually practice it in the wild. And so we had a phone conversation around when you are presented with a ticket like that, and you have no idea how you're going to implement a feature, how do you get started with testing, and when do you write your first test? Do you TDD? Do you BDD? Or do you PDD? That last one I made up, it stands for Panic-driven development. But it's what's your approach to how do you actually then get to the point where you can write a test? And I have a couple of thoughts. But I'm really curious, how does that flow work for you? What have you learned throughout the years to then help yourself write that first test? Or where do you start? CHRIS: Well, this is an interesting question. I like this one. I think it varies. And I think there's a lot of dogma around TDD as a practice. And I think it is super useful to break that apart and hear different individual stories of it. I know there are plenty of folks who are like, TDD is just not a thing and whatnot, and I'm certainly not in that camp. But I also don't TDD 100% of the time because sometimes I'm not super clear on what I'm doing, or I'm in more of an exploratory phase. 
That said, I think there's a...I want to answer the question somewhat indirectly, which is I know how to test most of the code that I work on now as a web developer in a Rails application because I've done most of the things a bunch of times. And the specifics may be different, but the like, to integrate with this external system, and I have to build an API client or whatever, I know how to do that. And there is a public API of some class that I will be exercising against and so I can write tests against that. Or I know that the user is going to click a button, and then something needs to happen. And so I can write that test, and it fails, and then it starts to push me towards the implementation. There are also times where it's actually quite hard to get the test to lead you in the right direction, and you have to know what hop to make, and so sometimes I just do that. But yeah, rolling back a little bit, I think there is a certain amount of experience that is necessary. And I think one of the critical things that I want to share with folks that are potentially newer to testing overall is that it is actually quite hard. You have to understand your system and how you're going to approach it, you know, one step removed, or it's like a game of chess where you're thinking a couple of moves ahead. You have to understand it in a deeper way. And so, if testing is difficult, that might just be totally reasonable at this point. And as you come to see the patterns within a Rails application or whatever type of application you're working on over and over, it becomes easier to test. But if testing is hard, that may not mean...like, how do I phrase this? There's like an impostor syndrome story in here of like, if you're struggling with testing, it may not be that something is fundamentally broken. You just may need a couple more chances to see that sort of thing play out. And so, for me, in most cases, I tend to know where to start or when not to. Like, I feel fine not testing when I don't test most of the time. I will eventually get things under test coverage such that I feel confident in that. And whenever I have one of those moments, I will stop and look at it and say, "Why didn't I know how to test this from the front, like, from the start?" But it's rare at this point for things to be truly exploratory. There's always some outer layer that I can wrap around. But like, I know X needs to happen when Y occurs. So how do I instrument the system in that way? But yeah, those are some thoughts. What are your thoughts? Does what I said sound reasonable here? STEPH: Yeah, I really like how you highlighted that pausing for reflection. That was something that I didn't initially think of, but I really liked that, to then go back to be like, okay, revisiting myself a couple of days or however earlier when I first started this. Now I can see where I've ended up. How could I have made that connection sooner as to where I was versus the tests I ended up with? Or perhaps recognizing that I couldn't have gotten there sooner, that I needed that journey to help me get there. So I really like the idea of pausing for reflection because then it helps cement any of those learnings that you have made during that time. Also, the other part where you mentioned the user clicks a button, and something happens, that's where I immediately went with this. I also liked that you highlighted that TDD has that bit of dogma, and I don't always TDD. I do what I can, and it helps me. 
But it has to be a tool versus something that I just do 100% of the time. But with more of that BDD approach or that very high-level user-level integration test of where if I need to pull data from an API and then show it to the user, okay, I know I can at least start with a high-level test of I want the user to then see some data on a page. And that will lead me down some path of errors. It might help me implement a route and a controller and then a show action, so it will at least help me get started. Or even if it doesn't give me helpful enough errors, it at least serves as my guideline of like, this is my North Star. This is where I'm headed. So then, if I need to revisit, okay, what's the thing that I'm focused on at the moment? I can go back and be like, okay, I'm focused on achieving this. What's the next smallest step I can take to get there? The other thing that I've learned over time is I've given myself the chance to be messy because I got so excited about the idea of unit testing and writing small, fast tests that I would often try to start with small objects and then work my way backwards into like, okay, I have this one object that does this thing and one object that like...let's use a concrete example. So one object that knows how to communicate with an API and one object that knows how to then parse and format the data I want and then something else that's then going to present that data to the user. But I found when I started with small objects, I would get a little lost, and I wasn't always great at bringing them together. So I've taken the opposite approach of where if I'm really not sure where I'm headed and I'm in that more exploratory phase or even just that first initial pass at a feature, I will just start messy. So if I am pulling data from an API and need to show it to a user on a screen, I'll just dump it in the controller if I need to. I'll put it all there together. And then once I actually have something that is parsing, or I have something appearing on the page, then I will start to say, "Okay, now that I can see what I need and I can see the pieces that I've written, how can I then start to extract this into smaller objects?" And now I can start writing unit tests for that data. So that is something that has helped me: just start high, keep it high, be messy, until you start to see some of the smaller objects that you can pull out. CHRIS: Yeah, I think there's something that you were just saying there that clicked for me of we didn't start with the why of TDD. And I don't think we've talked about why we believe in TDD in a while. So this feels like a thing we're saying. It's not good just because it's good, or we don't believe it's good just because that's what we say. For me, it is because it anchors us outside of the code; sort of, it starts us thinking of it from the user perspective or some outer layer. So even if you're unit testing some deeply nested class within your application, there's still an outer layer. There's still a user of that class. And so, thinking about the public API, I think, is really useful. And then the further out you get, the better that is, and I believe strongly in thinking from the outside in on these sorts of things. And then the other thing you said of allowing for refactoring. And if we have tests, then it's so much easier to sort of...I totally 100% agree with that; like, I start messy. I start very messy. I wanted to pretend that I was going to be like, oh, I'm so...Steph, I can't believe this. But no, of course, I start messy.
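A minimal sketch of the outside-in flow being described here, assuming a hypothetical Rails app with rspec-rails, Capybara, and WebMock; the quotes endpoint and all names are invented for illustration, not taken from the episode:

```ruby
# spec/features/user_views_quotes_spec.rb
# The high-level "North Star" test: it only asserts that the user sees
# data on the page and says nothing about how the code is organized.
require "rails_helper"

RSpec.feature "User views quotes" do
  scenario "data pulled from the API appears on the page" do
    # Assumes WebMock is set up for stubbing external HTTP calls.
    stub_request(:get, "https://api.example.com/quotes")
      .to_return(
        body: [{ "text" => "Stay curious" }].to_json,
        headers: { "Content-Type" => "application/json" }
      )

    visit quotes_path

    expect(page).to have_content("Stay curious")
  end
end

# app/controllers/quotes_controller.rb
# The deliberately messy first pass: fetching and parsing dumped
# straight into the controller until the shape of the feature is clear.
# (app/views/quotes/index.html.erb would simply render @quotes.)
require "net/http"

class QuotesController < ApplicationController
  def index
    response = Net::HTTP.get(URI("https://api.example.com/quotes"))
    @quotes = JSON.parse(response).map { |quote| quote["text"] }
  end
end
```

Once the page renders and the spec passes, the inline fetching and parsing can be extracted into, say, a client object and a presenter, each picking up its own unit tests while the high-level spec keeps passing.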
Why would you start trying to do the hard thing first? No, get something that works. But then having the test coverage around that makes it so much easier to go through those sequential refactoring steps. Versus if you have to write the code correctly upfront and then add test coverage around that, it sort of inverts that whole thing. And so, although it may take a little bit longer to write the tests upfront, I do exactly what you're describing of like, I write the tests that tell some truth about the system and constrain the system to do that thing. And then I can have a messy implementation that I can iteratively refactor over and over, and I can extract things from. And then, I can tell a more concise testing story about those. And so it really is both the higher-level perspective I think is super useful and then the ability to refactor under that test coverage is also very useful. And it makes my job easier because I can start messy. I love starting messy. It's so much better. STEPH: Yeah, and I think former me had the idea that for me to do TDD properly meant that I had these small, encapsulated objects that I wrote unit tests for. And yes, that is the goal. I do want that, but that doesn't mean I have to start there. That is something that then I can work my way towards. That also falls in line with the adage from Sandi Metz that the wrong abstraction is more costly than no abstraction. And so I'd rather start with no abstractions and then start to consider, okay, how can I actually move this out into smaller objects and then test it from there? There's also something that I heard that I haven't done as often, but I really liked the idea; it feels very freeing, is that when you do get started and if you write your first test, if you write a test and it helps you make some progress but then you come back to it later and you're like, you know, the test doesn't really add value, or it's not helping me anymore, just thank it and delete it and move on. Just because you wrote it doesn't mean it needs to stay. So if it provided some benefit to you and helped you through that journey of adding the feature, then that's wonderful. But don't be timid about deleting it or changing it so that it does serve you because otherwise, it's just going to be this toxic test that gets merged into the main branch, and it's going to be untrustworthy. Or maybe it's fussy and hard to please, or it's just really not the supportive test that you're looking for. And so then you can turn it into more of a supportive test and make it fit your goals instead of just clinging to every test that we've written. CHRIS: I like the framing of tests as scaffolding to help you build up the structure. But then, at the end, some of the scaffolding gets ripped away and thrown out. And I do think, again, testing ends up in this weird place. The dogmatic thing that we were talking about earlier feels very true. And I've noticed, particularly on larger teams, folks being very hesitant to delete tests like, that feels like sacrilege. Of course, you can't delete tests; the tests are how we know it's true, which is true, but you can interrogate that. You can see like, how true is it? And every test has a cost and maintenance burden, runtime, et cetera. You probably know well, Steph, about having test suites that take a bunch of time to run and then maybe wanting to spend a little bit of time trying to reduce that overall time. And so there's always going to be a trade-off there. 
Actually, someone reminded me of an anecdote recently. I joined a project, and most of the test suite or all of the test suite was commented out because it was flaky or intermittent. And I was like, "Oh, I'm going to delete that." And people were like, "You're what?" I'm like, it's commented out. We're not using it. Let's tell the truth. Git will have it. We can go back and get it. But let's tell the truth with what we're doing, like...this commented-out test suite is almost worse in my mind than having nothing there. The nothing feels painful, right? Let's experience that. Whereas the commented out stuff is like, well, we have a test suite; it's just commented out. It's like, no, you don't have a test suite at all. That's not what's going on here. But there were other thoughtboters on the project that poked a good amount of fun at me when they were like, "The first thing you did on this project was delete the test suite?" And I was like, "Yeah, I don't know, I was feeling spicy that day or something." But I think the test suite needs to serve the work that we're doing in the same way that everything else does. And so occasionally, yeah, deleting tests is absolutely the right thing, and then probably add back some more. STEPH: It's funny how that reaction exists. And I've done it before myself where like, if you see commented out code and you put up a PR to remove it, I feel like most people are going to be like, yeah, yeah, that's great. Let's get rid of this. It's clearly not in use. It's commented out. But then removing a skipped test then has people like, "Well, but that test looks like it could be valuable, and we're going to fix it." And it's like...all I can go back to is that silly example of like, you've got your skinny jeans, one day I'm going to fit into those skinny jeans. And so one day, I'm going to fix this test, and it's going to serve the purpose. And it's going to be the me I want to be. [laughs] And it is funny how we do that. With code, we're like, sure, we can get rid of it. But with tests, we feel this clinginess to them where we want to hold on to it and make it pass. And I think that sometimes has to do with the descriptions. There are commented-out test descriptions that I've seen that are like, user can log in, or if given a user without permission, they can't access. And it's like, oh, that sounds important. I'm now nervous to delete you versus fix you, but you're still not actually running and providing value. And so then I have to negotiate with myself as to where do we actually go from here? But I do love the idea of deleting tests that are skipped because we should just let them go. We either have to dedicate time to fix them or let them go and make that hard decision. CHRIS: The critical idea of future me will have more time, future me will be calm and will work through all the other bugs..."future discounting," as far as I understand it, is the formalization of the term. Yeah, it's never true. I've only gotten busier over time, just broadly speaking. And that seems to be a truism in software projects as well. It's like, oh, we just have to write a bunch of features, and then it'll be calm. I don't even think I'd want that. But future me will not have more time. And so choosing the things that we do invest in versus not is tricky, but the idea that future me will have a lot of time, or future us will, is probably not true. STEPH: Well, I think the story that I just shared at the beginning of our chat highlights that future me won't always be calm.
[laughs] So let's work with what I've got. Let's not bank on that. Future Stephanie might be very emotional about dropping her dog off at boarding for a couple of days. [laughs] Future me might be very emotional about fixing this test. All right, well, thanks for going on that journey with me. That's really helpful. I knew you'd have some great insights there. Mid-roll Ad: Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: What's going on in my world? Last week we had our first ever Sagewell all-hands get-together in person. Many of us have met in person before, but not everyone. And so this was a combination celebration for our seed fundraising round, which had happened actually sometime right at the end of last year. But due to COVID in the world and complexity, it was difficult to get everybody together. So that finally happened. And then we sort of grafted on to that celebration, that party that we were having. Like, let's just extend a day in either direction and do some in-person working and all of that. And that was really great. I'm trying to find that ideal middle ground between we are a remote team, but there is definitely value in occasionally being in person, particularly getting to know people but also just having some higher bandwidth conversations, planning, things like that. They just feel different in person. And so, how do we balance that? And how do we be most productive and all that? But it was really great to meet the team more so than I had on the internet and get to spend some time in person and do some whiteboarding. I drew on a whiteboard with a team. We were all looking at the same whiteboard. We were in the same room. And I drew on a whiteboard some entity relationship diagrams. It was awesome. [laughs] It was super fun. It was one of those cases where we had built an assumption deeply into our codebase, and suddenly instead of having one of a thing, we may now have multiple of a thing. There's a wonderful blog post by Shawn Wang called Preemptive Pluralization, which I think is based on an episode of Ben Orenstein's podcast, The Art of Product, where Ben basically framed the idea of like, I've never regretted pluralizing something earlier. A user has one account; they have multiple accounts. They just happen to have one at this time, et cetera. So we're in one of those. And it was a great thing to be able to be in a room and whiteboard. I knew at the time when I did it way back when that I was making the wrong decision. But I didn't know exactly how and the shape. And so now we have to do that fun refactoring; so glad that we have a giant test suite that will help us with said refactoring.
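A quick sketch of the preemptive pluralization idea in Rails terms; the User and Account names are hypothetical, not taken from Sagewell's codebase:

```ruby
# Before: "a user has exactly one account" is baked into the
# association, so every caller assumes a single account.
class User < ApplicationRecord
  has_one :account
end

# After: pluralize preemptively. A user merely happens to have one
# account today; nothing structural forbids a second one later.
class User < ApplicationRecord
  has_many :accounts

  # Convenience for the common single-account case in the meantime.
  def primary_account
    accounts.first
  end
end
```

The change is cheap while there is still only one record per user; it becomes the whiteboard-and-refactor exercise described above once the singular assumption has spread through the codebase.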
But yeah, so that was really great to be able to do in person. STEPH: I think there can be so much value in getting together and getting to see your team and, like you said, have those high-level conversations and then just also getting to hang out. So it's really nice to hear that reinforced since you experienced that same positivity from that experience. Do you think that's something that y'all will have going forward? Do you think you're going to try to get together like once a year, once a quarter? Maybe it hasn't even been talked about. But I'm hearing that it was great and that maybe there will be some repeats. CHRIS: Yes, yeah. I think I'm inclined to quarterly at a minimum and maybe even slightly more than that. Some of us are centered around Boston, and so it's a little bit easier for us to pop in and work at a WeWork, that sort of thing. But I think broadly, getting the team together and having that be intentional. And personally, I'm inclined to that being more social time than productive time because I think that's the thing that is most useful in person is building relationship and rapport and understanding folks better. I remember so pointedly when thoughtbot would have the annual Summer Summit, and leading up to that, there was a certain amount of conversation. But there were also location-specific rooms, and a lot of the conversation happened like in the Boston channel or whatnot. And then, without fail, every year after the Summer Summit, suddenly, there was a spike in cross-team chatter. Like, the Ruby room now had a bunch of people from San Francisco talking to Boston, talking to New York, et cetera. And it was just this incredibly clear...I think we could actually, like, I think at one point someone plotted the data, and there's just this stepwise jump that would happen every time. And so that sort of connecting folks is really what I believe in there. And the more we're leaning into the remote thing, then the more I think this is important. So I think quarterly is probably the lowest end that I would think of, but it might be more. And it's also a question of like, what shape does this take? Is it just us going and hanging out somewhere? Or are we productively trying to get together with a whiteboard? I think we'll figure that out as we go on. But it's definitely something that I'm glad we've done now, set the precedent for, and will hopefully do more of going forward. STEPH: Yeah, I always really love the thoughtbot Summits. In fact, we have one coming up. It's coming up in May, and this one's taking place in the UK. But there have been some interesting conversations around Summit because before, it was the idea that everybody traveled. But typically, they were in Boston, so for me, it was particularly easy because it was already where I lived. So then showing up for Summit was no biggie. But with this one happening in the UK and COVID and travel still being a concern, there have been more conversations around: okay, this is awesome. People who want to get together can. There are these events going on. But there are people who don't want to travel, don't feel up to travel. They have family obligations that then make it very difficult for them to leave one partner at home with the kids. And I myself am in that space where I thought really hard about whether I was going to travel or not. And I've decided not to, just for personal reasons.
But then it brings up the question of okay, well, if we have a number of people that are going to be in person together, then what about the people who are remote? And the idea of running something that's hybrid is not something that we've really figured out. But those of us that are remote, we're going to get together and figure out what we want to do and maybe what's our version of our remote summit since we're not going to be traveling. But I feel like that's definitely a direction that needs to be considered as teams are getting in person because if you do have people that can't make it, how can you still bring them in so it's an inclusive event but respects the fact that they can't necessarily travel? I don't know if that's a concern that every team needs to have, but it's one that I've been thinking about with our team. And then I know others at thoughtbot have been considering it just because we do have such a disparate team. And we want to make everybody comfortable and feel included. CHRIS: Yeah, as with everything in this world, there's always complexities and subtlety. Thankfully, for our first get-together, we were able to get everyone into the same space. But I do wonder, especially as the team grows, even just scheduling, the logistics of it become really complicated. So then does the engineering team have get-togethers that are slightly different, and then there's like once yearly a big get-together of the whole team? Or how do you manage that and dealing with family situations and all that? It is very much a complicated thing that thankfully was very straightforward for us this first time, but I fully expect that we'll have to be all the more intentional with it moving forward. And, you know, that's just the game. But switching gears ever so slightly, we did have a fun thing that we've worked on a little bit over the past few weeks. We've finally landed it in the app. But we were swapping out the masked input library that we were using, so this is for someone entering their birthday, or a phone number, or social security number, or dates. I guess I already said dates. Passwords I think we also use here. But we have a bunch of different inputs in the app that behave specially. And my goodness, is this one of those things that falls into the category of, oh yeah, I assume this is a solved problem, right? We just have a library out there that does it. And each library is like, oh no, all of the other libraries are bad. I will come along, and I will write the one library to solve all of the problems, and then we'll be good. And it is just such a surprisingly complicated space. It feels like it should be more straightforward. And as I think about it, it's not; it's dealing with imperative interactions between a user and this input. And you need to think through everything: what happens when you hit the delete key? What do you want to happen? What's the most discoverable for every user? How do we make sure they're accessible? But my goodness, was it complicated. I think we're happy with where we landed, but it was an adventure. STEPH: I'll be honest, that's something that I haven't given as much thought to. But I guess that's also I just haven't worked with that lately in terms of a particular library that then masks those inputs. So I'm curious, which library were you using before, and then which one did you switch to? CHRIS: That's a critical piece of information that I have left off here.
So for the previous one, we were using one called svelte-input-mask, which, again, part of the fun here is you want to have bindings into whatever framework that you're using. So svelte-input-mask is what we were using before. We have now moved on to using iMask, which is not like the thing you wear on your face, but it is the letter I, so like igloo, Mike, et cetera: I-M-A-S-K, iMask. And so that is a lower-level library. There are bindings to other things. But for TypeScript and other reasons, we ended up implementing our own bindings in Svelte, which was actually relatively straightforward. Again, big fan of Svelte; it's a wonderful little framework. But that is what we're using now, and it is excellent. It's got a lot of features. We ended up using it in a slightly more simple version or implementation. It's got a lot of bells and whistles and configurations. We went up the middle with it. But yeah, we're on iMask, which also led to a very entertaining moment where it was interacting with our test suite in an interesting way. And so, one of the developers on the team searched for Capybara iMask. [laughs] And I forget exactly how it happened, but if you Google search that, for some reason, the internet thinks an iMask is a thing that goes over your mouth. And so it's a Capybara, like the animal, facemask. It's very confusing, but this got dropped into our Slack at one point, someone being like, "I searched for Capybara iMask, and it got weird, everybody." So yeah, that was a fun, little side quest that we got to go on. STEPH: [laughs] I just Googled it as you told me to, and it's adorable. Yeah, it's a face mask, and it has a little capybara cartoon on the front of it. Yeah, there are many of these. [laughs] CHRIS: When I think of an iMask, though, it's the thing that you put over your eyes to block the light if you want to sleep. But they're like, an iMask, like, a mask that still keeps your eyes outside of it. I don't understand the internet. It's a weird place. STEPH: I think that was just Google seeing Capybara iMask. Nope, don't know 'I,' so let's put together Capybara mask, and that's what you got back. [laughs] CHRIS: I guess, yeah. It's just a Capybara mask. And I'm projecting the 'I' because I phonetically heard that for a while. Anyway, yes. But yeah, masked inputs, so complicated. STEPH: This is adorable. I feel like there should be swag for when people move. Like when people find things like this, this is the type of thing that then I stash and then wait for their anniversary at the company, and then I send it to them to remind them of this time that we had together. [laughs] There was also a moment where you said 'I.' You were explaining I as in the letter I, not E-Y-E for eye mask. And you said igloo, and my brain definitely short-circuited for a minute to be like, did he just say igloo? And it took me a minute to go, oh, he's helping phonetically say that this is for the letter I. CHRIS: Yep. The NATO phonetic alphabet: if you don't explain that that's what you're doing, now I'm just naming random other objects in the world. Sorry. STEPH: [laughs] CHRIS: And that's why I cut myself off halfway through. I'm like, now you're just naming stuff. This isn't helping. STEPH: [laughs] CHRIS: Yes, the letter I, the letter M. [laughs] STEPH: All of that was a delightful journey for me, and I was curious.
I'm glad you brought up the tests because I was curious if y'all are testing if things are getting obscured, but it sounds like y'all are, which is what helped give you confidence as you were switching over to the new library. CHRIS: Yeah, although to name it, we're not testing at a terribly low level. This is a great example of where I believe in feature specs. Like, within our Capybara feature spec, we are saying, and then as a user, I type in this value into the input. And critically, although this input needs to have special formatting and presentational behavior, it should functionally be identical. And so it was a very good litmus test of does this just work? And then, actually, our feature specs ended up in a race condition, which is just an annoying situation where Capybara moves so much more quickly than the user it's supposed to represent. But as we were having that conversation, I was like, wait a minute; I know that users are slower than a computer. But is this actually an edge case that's real that we need to think about? And I think we did end up slightly changing our implementation. So our feature specs did, in a way, highlight that. But mostly, our feature specs did not need to change to adapt to and then fill in the formatted input. It was just fill in the input with the value. And that did not change at all, but it did put a tiny bit of pressure on our implementation to say, oh, there is a weird, tiny, little race condition here. Let's fix that. And so we did. Race conditions: no fun at all. STEPH: Interesting. Okay, so y'all aren't actually testing. Like, there's no test that says, "Hey, when someone types into this field, then there should be this different UI present because we are obscuring the text that they're putting into this field." It was, as you mentioned, we're just testing that we changed over libraries, and everything still works. So then do you just go through that manual test of, then you go to staging, and then you test it that way? CHRIS: Yeah, that's a great question, yes, although as you say it, it's interesting. I guess there's a failure mode here of our test suite not enforcing that the formatting, the masking behavior, is happening. But it does test that the value goes through this input, gets submitted to the server, turns into the right type of value in the back end, all of that. And so I guess this is an example of how I think about testing: that's the critical bit, and then it's a nicety. It's an enhancement that we have this masking behavior. But if that broke, as long as the actual flow of data is still working, it can't break in a way that leaves a user unable to use it. It sort of reminds me of the Mitch Hedberg joke: an escalator can never break; it can only become stairs. And so I'm in that mindset here where a masked input that you have proper feature spec coverage around can never quote, unquote, "break." It can just become a plain text input. STEPH: I love how much that resonates with me. And I now know that when I'm writing tests, I'm going to think back to Mitch Hedberg and be like, oh, but is it broken-broken, or is it just now stairs? Because that's often how I will think of feature specs and how low level I will get with them. And this is on that boundary of like, yes, it's important that we want to obscure that data that someone's typing in, but it's not broken if it's not obscured. So there's that balance of I don't really want to test it.
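A minimal sketch of the kind of Capybara feature spec being described; the route, field label, and Profile model are invented for illustration, and the spec deliberately pins down the flow of data rather than the masking itself:

```ruby
require "rails_helper"

RSpec.feature "User enters their date of birth" do
  scenario "the typed value round-trips to the server" do
    visit new_profile_path

    # This reads the same whether or not the JavaScript mask is active:
    # Capybara just types into the field. If the mask breaks, the input
    # degrades to a plain text field and this spec still passes.
    fill_in "Date of birth", with: "01/31/1990"
    click_on "Save"

    expect(page).to have_content("Profile saved")
    expect(Profile.last.date_of_birth).to eq(Date.new(1990, 1, 31))
  end
end
```

In escalator terms, the masking is allowed to become stairs; only the data flow, the part the user truly depends on, is held in place by the suite.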
Someone will alert us. Like if that breaks, someone will alert us, and it's not the end of the world. It's just unfortunate. But if they can't sign in or they can't actually submit the form, that's a big problem. So yes, I love this comparison now of is it actually broken, or is it just stairs? [laughs] As a guideline for how much should we test at this feature level or test in general? What should we care about? CHRIS: I feel like this is a deep truth that I believed for a long time. And I think I probably, somewhere in the back of my head, connected it to this joke. But I feel really good that I formally made that connection now because I feel like it helps me categorize this whole thing. "Sorry for the convenience," as the joke goes. And so yeah, that's where we're at. STEPH: For anyone that's not familiar with the comedian Mitch Hedberg, we'll be sure to include a link to that particular joke because it's delightful. And now it's connected to tech, which makes it just even more delightful. CHRIS: I only understand anything by analogy, especially humorous analogy. So this is just critical to my progression as a developer and technologist. STEPH: Yeah, I've learned over the years that there are two ways that I retain knowledge: it either caused me pain, or it made me laugh. Otherwise, it's mundane, and it gets filtered out. Laughter is, of course, my favorite. I mean, pain sticks with me as well. But if it's something that made me laugh, I just know I'm far more likely to retain it, and it's going to stick with me. Mid-Roll AD: And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines, Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free. CHRIS: On that wonderful framing there, I think we should wrap up. What do you think? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at [email protected] via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner.
Let's make your product and team a success.