[LRUG] Testing SOAs

Mark Burns markthedeveloper at gmail.com
Mon Jun 23 15:57:02 PDT 2014


Forgot to mention another maybe obvious advantage to the client library
approach.
Refactoring makes more sense if you're refactoring Ruby objects. If you do
need to move something from one HTTP service to another, it's easier to do
that when the fact that it's an HTTP service is just an implementation
detail behind an object.
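
To make that concrete, something like this is what I have in mind (just a
sketch; the class name and the use of Faraday are illustrative assumptions):

    require 'faraday'
    require 'json'

    # A client object that hides the fact that the users service speaks
    # HTTP. Callers only ever deal with plain Ruby data.
    class UsersClient
      def initialize(base_url)
        @http = Faraday.new(url: base_url)
      end

      # Returns a plain hash. If the users service moves elsewhere, or
      # stops being HTTP at all, only this class needs to change.
      def find(id)
        JSON.parse(@http.get("/users/#{id}").body)
      end
    end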


On 24 June 2014 06:45, Mark Burns <markthedeveloper at gmail.com> wrote:

> I'd vote for the second approach.
>
> A drawback of the VCR approach is that any change or new requirement in
> the client of one _HTTP_ service means the server for that service has to
> be built first, effectively eliminating any of the productivity benefits
> that can come from parallelising the workload. The client library approach
> also helps you to plan out your app and new features more easily, because
> you are only thinking in terms of POROs for your required responses.
> Thinking in terms of your domain is easier and closer to solving the
> problem than thinking in terms of specific JSON/XML responses.
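>
> To illustrate (a rough sketch with invented names), the test double is
> just another PORO honouring the same interface as the real client:
>
>     # A plain Ruby stand-in for the real users client. Tests depend on
>     # the interface (find -> hash), not on any HTTP machinery.
>     class FakeUsersClient
>       def find(id)
>         { 'id' => id, 'name' => 'Test User' }
>       end
>     end
>
>     # Inject whichever client you like; the calling code doesn't care.
>     service = GreetingService.new(users_client: FakeUsersClient.new)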
>
> I stressed HTTP for a reason. The VCR approach also ties you to a
> synchronous HTTP architecture, whereas a plain Ruby library makes it
> easier to swap out the backend, e.g. for something like RabbitMQ.
> Practically speaking, you're unlikely to have written your client code in
> a way that lets you jump straight to being async, but it does at least
> make the change feasible.
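>
> To make the swap concrete (again just a sketch; the names are invented
> and the bunny gem is an assumed AMQP client), the transport sits behind
> a duck-typed backend:
>
>     require 'faraday'
>     require 'bunny'
>
>     class HttpBackend
>       def deliver(payload)
>         Faraday.post('http://users.internal/events', payload)
>       end
>     end
>
>     class AmqpBackend
>       def initialize
>         @channel = Bunny.new.tap(&:start).create_channel
>       end
>
>       # Same #deliver interface, completely different transport.
>       def deliver(payload)
>         @channel.default_exchange.publish(payload, routing_key: 'users.events')
>       end
>     end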
>
> The complexity you mention in the second/third approaches is IMHO
> probably almost inevitable. Choosing SOA is making a decision to move
> towards a more complex architecture in exchange for its flexibility. As
> with any development, software tends to expand in a fractal-like fashion,
> with each part having similarities to the system as a whole. Programs can
> be one-liners, multi-line scripts, scripts with includes to break code
> into files, scripts that use methods to break down complexity, programs
> that use classes to break apart functionality, and, for SOA, programs
> that communicate with other programs.
>
> Each part is approximately sized by how much the average programmer can
> hold in their head. With SOA, we're just choosing another point at which
> to break apart complexity, so that the pieces can be simpler in and of
> themselves; but as with most simplification, it still introduces
> complexity at the boundary points. As with any testing, having
> well-tested components, tested at the right level internally, makes them
> more stable and easier to rely on. Testing the integration between
> components is somewhat of an art form that will always eventually come
> down to choosing an arbitrary number of combinations at different levels
> until you feel confident that things work.
>
> Testing at an integration level is always going to be about striking a
> balance, whether that means functional testing like traditional
> Rails-style controller testing, any other non-isolated unit testing,
> integration of two different services, full-stack system tests that hit
> development replications of the whole system, browser-based tests that
> hit staging servers as a user might, or production tests that actually
> run against production as a health check. It should be a feedback loop
> too: make sure not to run so many tests that deployment starts to seem
> impossible because you can never get a full build to pass against every
> single service before some other change breaks it. If the development
> cycle makes it too difficult to get a change into production before other
> changes break things again, it probably means you either have too much
> slow system testing and not enough confidence in the internal services
> themselves, or the boundary between services was not accurately drawn and
> there's too much coupling between them.
>
> The only real generic advice I can give on testing SOAs would be to
> avoid too many full system tests, and to avoid testing at the rather
> arbitrary HTTP border the way WebMock does. Getting enough confidence in
> the services themselves is what really matters.
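>
> In other words, stub the client object rather than the wire. Roughly
> (along the lines of the post below; the names are invented):
>
>     # An rspec-mocks verifying double stands in at the object boundary,
>     # not at the HTTP boundary.
>     it 'greets the user by name' do
>       client = instance_double(UsersClient, find: { 'name' => 'Ada' })
>       greeter = Greeter.new(users_client: client)
>       expect(greeter.greeting(1)).to eq('Hello, Ada')
>     end
>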
> I wrote a blog post recently which is kind of related:
>
> http://blog.polyglotsoftware.co.uk/blog/2014/02/07/a-pattern-for-stubbing-out-apis-with-rspec/
>
>
> On 24 June 2014 04:58, Jon Wood <jon at ninjagiraffes.co.uk> wrote:
>
>> My experience so far in moving to an SOA from a big Rails monolith is
>> that you probably need to find a balance between stubbed clients and
>> running requests through the entire stack.
>>
>> When unit testing, and for initial smoke tests on the integration side,
>> stubbing out responses works well to keep things moving quickly and to
>> give yourself a level of confidence. But that doesn't guarantee that the
>> interface you're coding against actually corresponds to reality, so
>> you'll find yourself needing to run the entire stack for integration
>> testing as well. If anyone has any good advice on making that actually
>> happen I'd love to hear it, because we've yet to reach a point of being
>> happy on that side.
>>
>> Jon
>>
>>
>> On 23 June 2014 19:46, Jonathan <j.fantham at gmail.com> wrote:
>>
>>> Hi there,
>>>
>>> I'd like to know if any of you have any advice for testing Service
>>> Oriented Architectures, specifically the mocking of boundaries between
>>> services. Each team I've worked with in an SOA environment has had a
>>> different approach to the way they isolate their services from each
>>> other for testing. For example:
>>>
>>>  - use VCR to connect to the genuine service the first time and save a
>>> response (a sketch of this follows the list)
>>>  - create clients for your services, and use mock clients during
>>> testing, or stub the real clients explicitly in tests.
>>>  - create fake services that respond to the same interface as the real
>>> service but just return dummy responses (as in this talk
>>> https://www.youtube.com/watch?v=6mesJxUVZyI around 11:45).
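>>>
>>> For reference, the first approach typically looks something like this
>>> (the cassette name and the spec itself are invented):
>>>
>>>     require 'vcr'
>>>
>>>     VCR.configure do |c|
>>>       c.cassette_library_dir = 'spec/cassettes'
>>>       c.hook_into :webmock # record once, replay on later runs
>>>     end
>>>
>>>     it 'fetches a user from the genuine service' do
>>>       VCR.use_cassette('users/find') do
>>>         # First run hits the real service and records the response;
>>>         # subsequent runs replay the saved cassette.
>>>         client = UsersClient.new('http://users.example.com')
>>>         expect(client.find(1)).to include('name')
>>>       end
>>>     end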
>>>
>>> Each one seems to have its problems. The first is probably my preferred
>>> option, but I'm encountering some resistance to using it. The second and
>>> third are basically the same thing at a different level, and when I've
>>> seen these approaches used I've noticed they usually get very
>>> complicated. On occasion I've seen them end up with their own test
>>> suite.
>>>
>>> How do you go about mocking out the interfaces between services in your
>>> projects so that test suites can run in isolation? Do you use anything
>>> different to what I've just mentioned?
>>>
>>> Thanks!
>>> Jono.
>>>
>>>