In Kubernetes pods, Source Server connects reliably with Receiver Server
# Final Release Note

The Source Server connects reliably with the Receiver Server when they are running in Kubernetes pods. Previously, it would occasionally hang indefinitely. [#1191]

# Description

This is an issue that @shabiel noticed while trying to get YottaDB replication working in a Kubernetes setup. About 50% of the time, when replication and the application are started at the same time, the source server and receiver server (running in separate pods) connect and exchange a few handshake messages. After a few exchanges, however, they both hang, each waiting for a message from the other. At the hang point, the source server has sent a history record. Below is its stack trace while hung.

```c
(gdb) where
#0 repl_recv (sock_fd=5, buff=0x7ffe219a0660 "\001", recv_len=0x7ffe219a05cc, timeout=0, poll_direction=0x7ffe219a05d0) at /tmp/yottadb-src/sr_port/repl_comm.c:376
#1 gtmsource_recv_ctl () at /tmp/yottadb-src/sr_unix/gtmsource_process.c:444
#2 gtmsource_process () at /tmp/yottadb-src/sr_unix/gtmsource_process.c:1623
#3 gtmsource () at /tmp/yottadb-src/sr_unix/gtmsource.c:620
#4 mupip_main (argc=8, argv=0x7ffe219a52c8, envp=0x7ffe219a5310) at /tmp/yottadb-src/sr_unix/mupip_main.c:132
#5 dlopen_libyottadb (argc=8, argv=0x7ffe219a52c8, envp=0x7ffe219a5310, main_func=0x55a498574004 "mupip_main") at /tmp/yottadb-src/sr_unix/dlopen_libyottadb.c:151
#6 main (argc=8, argv=0x7ffe219a52c8, envp=0x7ffe219a5310) at /tmp/yottadb-src/sr_unix/mupip.c:21
```

This happens only in a Kubernetes setup; we have never seen it otherwise.

# Draft Release Note

The replication source server connects without any issues with the receiver server in a Kubernetes setup. Previously it could occasionally hang indefinitely, waiting for a message from the receiver server that never came.