1. 30 Aug, 2019 1 commit
  2. 27 Aug, 2019 3 commits
  3. 23 Aug, 2019 1 commit
  4. 22 Aug, 2019 2 commits
    • Fix a bug in benchmark code. (#2989) · 6fb38bde
      Easwar Swaminathan authored
      The total number of Allocs and AllocedBytes retrieved from
      runtime.MemStats were not being divided by the number of operations
      completed during the benchmark run, so the reported Allocs/op and
      Bytes/op were incorrect.
    • Implementation of the xds_experimental resolver. (#2967) · dc187547
      Easwar Swaminathan authored
      This resolver doesn't do much at this point, except return an empty
      address list and a hard-coded service config that picks the xds
      balancer with a round_robin child policy.
      
      Also moved the xdsConfig struct to the xds/internal package and exported
      it as LBConfig, so that both the resolver and the balancer packages can
      make use of this.
  5. 21 Aug, 2019 1 commit
  6. 20 Aug, 2019 1 commit
  7. 17 Aug, 2019 2 commits
  8. 16 Aug, 2019 1 commit
  9. 14 Aug, 2019 2 commits
  10. 12 Aug, 2019 1 commit
  11. 10 Aug, 2019 2 commits
  12. 08 Aug, 2019 3 commits
  13. 07 Aug, 2019 2 commits
  14. 31 Jul, 2019 1 commit
  15. 30 Jul, 2019 1 commit
  16. 26 Jul, 2019 2 commits
  17. 25 Jul, 2019 4 commits
  18. 23 Jul, 2019 4 commits
    • client: fix race between transport draining and new RPCs (#2919) · 97714221
      Doug Fawley authored
      Before these fixes, it was possible to see errors on new RPCs after a
      connection began draining and before a new connection was established. There is
      an inherent race between choosing a SubConn and attempting to create a stream
      on it. We should be able to avoid application-visible RPC errors due to this
      with transparent retry. However, several bugs were preventing this from
      working correctly:
      
      1. Non-wait-for-ready RPCs were skipping transparent retry, though the retry
      design calls for retrying them.
      
      2. The transport closed itself (and would consequently error new RPCs) before
      notifying the SubConn that it was draining.
      
      3. The SubConn wasn't synchronously updating itself once it was notified about
      the closing or draining state.
      
      4. The SubConn would go into the TRANSIENT_FAILURE state instantaneously,
      causing RPCs to fail instead of queue.
    • grpclb: enable keepalive (#2918) · a975db93
      Menghan Li authored
      So the grpclb client will reconnect when the connection is down (e.g. a proxy
      drops the server-side connection but keeps the client side alive).
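Client-side keepalive in grpc-go is configured through dial options; a configuration sketch of the mechanism involved (the durations are illustrative, not values from the commit):

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// Dial options enabling client-side keepalive: the client pings the server
// when the connection is idle and tears the connection down (triggering a
// reconnect) if no ack arrives within Timeout.
opts := []grpc.DialOption{
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time:                20 * time.Second, // ping after this much idle time
		Timeout:             5 * time.Second,  // wait this long for the ping ack
		PermitWithoutStream: true,             // ping even with no active RPCs
	}),
}
```

This is what lets the client notice a half-dead connection, such as the proxy scenario described above, instead of waiting indefinitely on a connection that will never deliver responses.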
  19. 19 Jul, 2019 1 commit
  20. 18 Jul, 2019 1 commit
  21. 17 Jul, 2019 1 commit
  22. 13 Jul, 2019 2 commits
  23. 12 Jul, 2019 1 commit