Unfortunately, I had made the incorrect assumption that all Google Compute Engine networks allow traffic between instances with no firewall in the way. It turns out that is not the case: internal traffic requires an explicit rule, and only the “default” network has one out of the box.
After adding such a rule to my “staging” network, I was able to connect on 1113. I had some specific rules for the gateway instance that allowed connections on 2113 but not 1113.
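For anyone else who hits this, a sketch of the kind of rule I mean (the rule name and the 10.240.0.0/16 source range are just examples; adjust them to your own network):

    gcloud compute firewall-rules create staging-allow-eventstore \
        --network staging \
        --allow tcp:1113,tcp:2113 \
        --source-ranges 10.240.0.0/16

That opens the Event Store TCP (1113) and HTTP (2113) ports between instances on the “staging” network.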
To be honest we haven’t got that far yet. We are using Google Compute Engine rather than App Engine or Container Engine, and only regular instances. If needed we can beef up the RAM and processors on the instances to improve performance. We are only a startup, so performance issues are something we would be lucky to have.
Seems reasonable - I’d be interested to hear how you get on.
BTW you get nothing from running Event Store in Docker. I’d strongly recommend removing that bit from your setup and running it directly on the instances.
James, is running Event Store in Docker not a good thing, considering it gives you prod == staging == test with respect to setup? I know what you mean in a sense, but having repeatability and reproducibility across different environments is great, IMHO.
Hehe, I guess you are not that big of a fan of CoreOS + Docker/rkt. For my part, I think it is great being able to be on an always-updated OS like CoreOS, getting security fixes without being involved. Event Store is kind of orthogonal to that stuff, I know, but you have to admit that VMs are on their way out and “containers” are on their way in, and thus it would be worth you guys focusing on a great container story.
We have been discussing this internally, to the point of providing a supported container (with a whole lot of weasel words around it for performance etc.). In general I have no issues with containers where you are not concerned with throughput/latency (e.g. most business systems); the place where it’s a mess is when you need performance.
Containers are one of those things that fall under “just because you can, doesn’t mean you should”. Hipsters using containers everywhere does not mean we’ll focus on them until there is a proven benefit to running one as an operational system.
The only ones which actually provide the claimed benefits are Solaris containers, but we don’t test on Solaris at all. Can you explain what actual benefits (that wouldn’t be provided by operating system packages) containers give you for running a database? Or is it a case of “works on my machine, so we’ll just ship my machine”? How do you manage storage in a sensible manner?
Not sure about that; I’m just stating the fact that Docker and rkt are a thing happening right now, not a fad IMHO, and you guys would do yourselves a favour by being as container-friendly as possible. Manage storage in a sensible way? Just like on a VM, but with volume mapping?
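To sketch what I mean by volume mapping (the image name and the /var/lib/eventstore data path are assumptions based on the public Event Store image; the host path is just an example), you map a directory on the host — ideally a dedicated disk — into the container so the data lives outside it:

    docker run -d --name eventstore \
        -p 1113:1113 -p 2113:2113 \
        -v /mnt/disks/eventstore-data:/var/lib/eventstore \
        eventstore/eventstore

The container can then be replaced or upgraded without touching the data, much like reinstalling a package on a VM.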
Bless Solaris, and may it rest in its grave. Yes, you can align dev with staging and prod, and that is a big thing from a developer perspective. Yes, Greg, I know that developer != ops.