<div dir="ltr">I'd vote for the second approach.<div><br></div><div>A drawback of the VCR approach is that any change or new requirement in the client of one _HTTP_ service means the server for that service has to be built first, effectively eliminating the productivity benefits that can come from parallelising the workload. The client approach also helps you to plan out your app and new features more easily, because you're thinking in terms of POROs for your required responses. Thinking in terms of your domain is easier, and closer to solving the problem, than thinking in terms of specific JSON/XML responses.</div>
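To illustrate what I mean, here's a minimal sketch (all the names are hypothetical, not from any real project): a plain Ruby client that hands the rest of the app domain objects rather than raw JSON, with the transport injected so tests never need a cassette or a running server.

```ruby
require "json"

# The domain object the rest of the app codes against.
Order = Struct.new(:id, :total, :status, keyword_init: true)

class OrdersClient
  # transport is anything responding to #get(path) and returning a JSON
  # string; in production it would wrap HTTP, in tests it can be a stub.
  def initialize(transport)
    @transport = transport
  end

  def find(id)
    data = JSON.parse(@transport.get("/orders/#{id}"))
    Order.new(id: data["id"], total: data["total"], status: data["status"])
  end
end

# In a test, no VCR cassette or real server is needed:
FakeTransport = Struct.new(:body) do
  def get(_path)
    body
  end
end

client = OrdersClient.new(FakeTransport.new('{"id":1,"total":25.0,"status":"paid"}'))
puts client.find(1).status
```

The calling code only ever sees `Order`, so the wire format can change without rippling through the app.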
<div><br></div><div>I stressed HTTP for a reason. The VCR approach also ties you to a synchronous HTTP architecture, whereas a plain Ruby library makes it easier to switch out the backend, e.g. to something like RabbitMQ. Although in practice you're unlikely to have written your client code in a way that lets you easily jump to being async, it does at least make the change feasible.</div>
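A rough sketch of that point, again with made-up names: if callers only ever see the client's Ruby interface, the transport behind it can move from a synchronous HTTP call to a queue (e.g. RabbitMQ via the bunny gem) without touching calling code. Here an in-memory Queue and a worker thread stand in for a real broker.

```ruby
# Both transports expose the same #call(request) -> response contract.

class InlineTransport
  def initialize(handler)
    @handler = handler # stands in for a synchronous HTTP round-trip
  end

  def call(request)
    @handler.call(request)
  end
end

class QueueTransport
  def initialize(handler)
    @requests  = Queue.new
    @responses = Queue.new
    # Worker thread plays the role of the remote consumer on the broker.
    @worker = Thread.new do
      loop { @responses << handler.call(@requests.pop) }
    end
  end

  def call(request)
    @requests << request
    @responses.pop # block until the "remote" reply arrives
  end
end

# The client is written once, against the transport contract:
class GreetingClient
  def initialize(transport)
    @transport = transport
  end

  def greet(name)
    @transport.call(name)
  end
end

handler = ->(name) { "Hello, #{name}!" }
[InlineTransport.new(handler), QueueTransport.new(handler)].each do |t|
  puts GreetingClient.new(t).greet("LRUG")
end
```

The design choice is simply that `GreetingClient` depends on a duck type, not on Net::HTTP or bunny directly.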
<div><br></div><div>The complexity you mention in the second and third approaches is IMHO probably almost inevitable. Choosing SOA means deciding to move towards a more complex architecture in exchange for its flexibility. As with any development, software tends to expand in a fractal-like fashion, with each part having similarities to the system as a whole. Programs can be one-liners, multi-line scripts, scripts split across files via includes, scripts that use methods to break down complexity, programs that use classes to break apart functionality, and, for SOA, programs that communicate with other programs.</div>
<div><br></div><div>Each of those steps is roughly governed by how much the average programmer can hold in their head. With SOA, we're just choosing another point at which to break apart complexity, so that the pieces can be simpler in and of themselves; but as with most simplification, it still introduces complexity at the boundary points. As with any testing, having well-tested components, tested at the right level internally, makes them more stable and easier to rely on. Testing the integration between components is somewhat of an art form that will always eventually come down to choosing an arbitrary number of combinations at different levels until you feel confident that things work.</div>
<div><br></div><div>Testing at an integration level is always going to be about striking a balance, whether that means functional testing like traditional Rails-style controller tests, other non-isolated unit tests, integration tests between two services, full-stack system tests that hit development replications of the whole system, browser-based tests that hit staging servers as a user might, or production tests that run against production as a health check. It should be a feedback loop too: make sure not to run so many tests that deployment starts to seem impossible because you can never get a full build through every single service before some other change breaks it. If the development cycle makes it too difficult to get a change into production before other changes break things again, it probably means either that you have too much slow system testing and not enough confidence in the internal services themselves, or that the boundary between services was not drawn accurately and there's too much coupling between them.</div>
<div><br></div><div>The only real generic advice I can give on testing SOAs would be to avoid too many full system tests, and to avoid testing at the rather arbitrary HTTP border as WebMock does. Building enough confidence in the services themselves is what really matters.</div>
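Concretely, the alternative to stubbing at the HTTP border is to substitute the whole client object, so tests describe the conversation in domain terms instead of pinning down exact URLs and JSON payloads. A tiny sketch (all names hypothetical):

```ruby
Account = Struct.new(:id, :balance)

# The real client would speak HTTP (or a queue); this test double honours
# the same Ruby interface, so nothing below it cares which one it gets.
class FakeAccountsClient
  def initialize(accounts)
    @accounts = accounts
  end

  def fetch(id)
    @accounts.fetch(id)
  end
end

class BalanceChecker
  def initialize(accounts_client)
    @accounts = accounts_client
  end

  def overdrawn?(id)
    @accounts.fetch(id).balance.negative?
  end
end

client  = FakeAccountsClient.new(1 => Account.new(1, -10.0), 2 => Account.new(2, 50.0))
checker = BalanceChecker.new(client)
puts checker.overdrawn?(1)
puts checker.overdrawn?(2)
```

The test never mentions a URL or a response body; if the wire format of the accounts service changes, only the real client and its own tests need to change.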
<div>I wrote a blog post recently which is kind of related:</div><div><a href="http://blog.polyglotsoftware.co.uk/blog/2014/02/07/a-pattern-for-stubbing-out-apis-with-rspec/">http://blog.polyglotsoftware.co.uk/blog/2014/02/07/a-pattern-for-stubbing-out-apis-with-rspec/</a><br>
</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 24 June 2014 04:58, Jon Wood <span dir="ltr"><<a href="mailto:jon@ninjagiraffes.co.uk" target="_blank">jon@ninjagiraffes.co.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">My experience so far in moving to an SOA from a big Rails monolith is that you probably need to find a balance between stubbed clients and running requests through the entire stack.<div>
<br></div><div>When unit testing, and for initial smoke tests on the integration side, stubbing out responses works well to keep things moving quickly enough to give yourself a level of confidence. But that doesn't guarantee that the interface you're coding to actually corresponds with reality, so you'll find yourself needing to run the entire stack for integration testing as well. If anyone has any good advice on making that actually happen I'd love to hear it, because we've yet to reach a point of being happy on that side.</div>
<div><br></div><div>Jon</div></div><div class="gmail_extra"><br><br><div class="gmail_quote"><div><div class="h5">On 23 June 2014 19:46, Jonathan <span dir="ltr"><<a href="mailto:j.fantham@gmail.com" target="_blank">j.fantham@gmail.com</a>></span> wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr">Hi there,<div><br></div><div>I'd like to know if any of you have any advice for testing Service Oriented Architectures. Specifically the mocking of boundaries between services. Each team I've worked with in a SOA environment has a different approach to the way they isolate their services from each other for testing. E.g.</div>
<div><br></div><div> - use VCR to connect to the genuine service the first time and save a response</div><div> - create clients for your services, and use mock clients during testing, or stub the real clients explicitly in tests.</div>
<div> - create fake services that respond to the same interface as the real service but just return dummy responses (as in this talk <a href="https://www.youtube.com/watch?v=6mesJxUVZyI" target="_blank">https://www.youtube.com/watch?v=6mesJxUVZyI</a> around 11:45).</div>
<div><br></div><div>Each one seems to have its problems. The first is probably my preferred option, but I'm encountering some resistance to using it. The second and third are basically the same thing at different levels, and when I've seen these approaches used I've noticed they usually get very complicated. On occasion I've seen them end up with their own test suite.</div>
<div><br></div><div>How do you go about mocking out the interfaces between services in your projects so that test suites can run in isolation? Do you use anything different to what I've just mentioned?</div><div><br>
</div>
<div>Thanks!</div><div>Jono.</div><div><br></div></div>
<br></div></div>_______________________________________________<br>
Chat mailing list<br>
<a href="mailto:Chat@lists.lrug.org" target="_blank">Chat@lists.lrug.org</a><br>
<a href="http://lists.lrug.org/listinfo.cgi/chat-lrug.org" target="_blank">http://lists.lrug.org/listinfo.cgi/chat-lrug.org</a><br>
<br></blockquote></div><br></div>
<br></blockquote></div><br></div>