Running server as described in README does not work properly
### Using the first setup as described in the README
The worker instance dies immediately when running `docker-compose build && docker-compose up`.
#### Repro steps:
Build and bring up the project:

```shell
docker-compose build && docker-compose up
```
The most relevant part is probably:

```
Error: Error loading shared library /app/node_modules/ref/build/Release/binding.node: Exec format error
```
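For reference, an "Exec format error" on a native addon usually means the binary was compiled for a different platform than the one loading it — for example, when `node_modules` built on the macOS host ends up inside the Linux image via `COPY` or a bind mount. One possible mitigation (assuming the build context is the repo root and the Dockerfile runs `npm install` itself — I have not verified this against the project's Dockerfile) is to keep host-built modules out of the image:

```
# .dockerignore (hypothetical): exclude host-built native modules so the
# ref/ffi bindings are compiled inside the container's own architecture
node_modules
```

With this in place, `docker-compose build --no-cache` should rebuild the native bindings for the container's platform.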
Full output from `docker-compose up`:
lump@lump-at-work powbox-server (master) $ docker-compose up
Creating network "powboxserver_default" with the default driver
Creating powboxserver_rabbitmq_1 ... done
Creating powboxserver_mongodb_1 ... done
Creating powboxserver_api_1 ... done
Creating powboxserver_worker_1 ... done
Attaching to powboxserver_rabbitmq_1, powboxserver_mongodb_1, powboxserver_api_1, powboxserver_worker_1
mongodb_1 | 2018-04-01T05:08:14.200+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=0a9663e20b92
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] db version v3.6.3
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] git version: 9586e557d54ef70f9ca4b43c26892cd55257e1a5
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1t 3 May 2016
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] modules: none
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] build environment:
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] distmod: debian81
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] distarch: x86_64
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] target_arch: x86_64
api_1 |
api_1 | > sandbox.js@1.0.0 start:api /app
api_1 | > node src/server/index.js
api_1 |
mongodb_1 | 2018-04-01T05:08:14.201+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
mongodb_1 | 2018-04-01T05:08:14.202+0000 I STORAGE [initandlisten]
mongodb_1 | 2018-04-01T05:08:14.202+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1 | 2018-04-01T05:08:14.202+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1 | 2018-04-01T05:08:14.202+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1 | 2018-04-01T05:08:14.788+0000 I CONTROL [initandlisten]
mongodb_1 | 2018-04-01T05:08:14.788+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
mongodb_1 | 2018-04-01T05:08:14.788+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
mongodb_1 | 2018-04-01T05:08:14.788+0000 I CONTROL [initandlisten]
mongodb_1 | 2018-04-01T05:08:14.794+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 5d1a7f61-5580-45b5-88b8-a28575a6fc14
mongodb_1 | 2018-04-01T05:08:14.809+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.6
mongodb_1 | 2018-04-01T05:08:14.815+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 9485b8ba-317e-4b0b-8db6-510793a483fc
mongodb_1 | 2018-04-01T05:08:14.828+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1 | 2018-04-01T05:08:14.831+0000 I NETWORK [initandlisten] waiting for connections on port 27017
worker_1 | /app/node_modules/bindings/bindings.js:83
worker_1 | throw e
worker_1 | ^
worker_1 |
worker_1 | Error: Error loading shared library /app/node_modules/ref/build/Release/binding.node: Exec format error
worker_1 | at Object.Module._extensions..node (module.js:681:18)
worker_1 | at Module.load (module.js:565:32)
worker_1 | at tryModuleLoad (module.js:505:12)
worker_1 | at Function.Module._load (module.js:497:3)
worker_1 | at Module.require (module.js:596:17)
worker_1 | at require (internal/module.js:11:18)
worker_1 | at bindings (/app/node_modules/bindings/bindings.js:76:44)
worker_1 | at Object. (/app/node_modules/ref/lib/ref.js:5:47)
worker_1 | at Module._compile (module.js:652:30)
worker_1 | at Object.Module._extensions..js (module.js:663:10)
worker_1 | at Module.load (module.js:565:32)
worker_1 | at tryModuleLoad (module.js:505:12)
worker_1 | at Function.Module._load (module.js:497:3)
worker_1 | at Module.require (module.js:596:17)
worker_1 | at require (internal/module.js:11:18)
worker_1 | at Object. (/app/node_modules/ffi/lib/ffi.js:6:11)
powboxserver_worker_1 exited with code 1
api_1 | Listening on port 3000...
api_1 | (node:18) UnhandledPromiseRejectionWarning: MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
api_1 | at Pool. (/app/node_modules/mongodb-core/lib/topologies/server.js:336:35)
api_1 | at emitOne (events.js:116:13)
api_1 | at Pool.emit (events.js:211:7)
api_1 | at Connection. (/app/node_modules/mongodb-core/lib/connection/pool.js:280:12)
api_1 | at Object.onceWrapper (events.js:317:30)
api_1 | at emitTwo (events.js:126:13)
api_1 | at Connection.emit (events.js:214:7)
api_1 | at Socket. (/app/node_modules/mongodb-core/lib/connection/connection.js:189:49)
api_1 | at Object.onceWrapper (events.js:315:30)
api_1 | at emitOne (events.js:116:13)
api_1 | at Socket.emit (events.js:211:7)
api_1 | at emitErrorNT (internal/streams/destroy.js:64:8)
api_1 | at _combinedTickCallback (internal/process/next_tick.js:138:11)
api_1 | at process._tickCallback (internal/process/next_tick.js:180:9)
api_1 | (node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
api_1 | (node:18) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
api_1 | (node:18) UnhandledPromiseRejectionWarning: Error: connect ECONNREFUSED 172.18.0.2:5672
api_1 | at Object._errnoException (util.js:1022:11)
api_1 | at _exceptionWithHostPort (util.js:1044:20)
api_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
api_1 | From previous event:
api_1 | at Object.connect (/app/node_modules/amqplib/channel_api.js:6:18)
api_1 | at Object.module.exports.listen (/app/src/server/rabbit.js:37:29)
api_1 | at Object. (/app/src/server/index.js:80:4)
api_1 | at Module._compile (module.js:652:30)
api_1 | at Object.Module._extensions..js (module.js:663:10)
api_1 | at Module.load (module.js:565:32)
api_1 | at tryModuleLoad (module.js:505:12)
api_1 | at Function.Module._load (module.js:497:3)
api_1 | at Function.Module.runMain (module.js:693:10)
api_1 | at startup (bootstrap_node.js:188:16)
api_1 | at bootstrap_node.js:609:3
api_1 | (node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
api_1 | Unhandled rejection Error: connect ECONNREFUSED 172.18.0.2:5672
api_1 | at Object._errnoException (util.js:1022:11)
api_1 | at _exceptionWithHostPort (util.js:1044:20)
api_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
api_1 | From previous event:
api_1 | at Object.connect (/app/node_modules/amqplib/channel_api.js:6:18)
api_1 | at Object.module.exports.middleware (/app/src/server/rabbit.js:22:10)
api_1 | at Object. (/app/src/server/index.js:36:12)
api_1 | at Module._compile (module.js:652:30)
api_1 | at Object.Module._extensions..js (module.js:663:10)
api_1 | at Module.load (module.js:565:32)
api_1 | at tryModuleLoad (module.js:505:12)
api_1 | at Function.Module._load (module.js:497:3)
api_1 | at Function.Module.runMain (module.js:693:10)
api_1 | at startup (bootstrap_node.js:188:16)
api_1 | at bootstrap_node.js:609:3
rabbitmq_1 | 2018-04-01 05:08:19.939 [info] <0.33.0> Application lager started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.486 [info] <0.33.0> Application mnesia started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.486 [info] <0.33.0> Application recon started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.493 [info] <0.33.0> Application os_mon started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.580 [info] <0.33.0> Application inets started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.580 [info] <0.33.0> Application asn1 started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.580 [info] <0.33.0> Application crypto started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.580 [info] <0.33.0> Application public_key started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.692 [info] <0.33.0> Application ssl started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.699 [info] <0.33.0> Application ranch started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.700 [info] <0.33.0> Application ranch_proxy_protocol started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.700 [info] <0.33.0> Application jsx started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.700 [info] <0.33.0> Application xmerl started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.700 [info] <0.33.0> Application rabbit_common started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:20.719 [info] <0.183.0>
rabbitmq_1 | Starting RabbitMQ 3.7.4 on Erlang 20.2.4
rabbitmq_1 | Copyright (C) 2007-2018 Pivotal Software, Inc.
rabbitmq_1 | Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 |
rabbitmq_1 | ## ##
rabbitmq_1 | ## ## RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc.
rabbitmq_1 | ########## Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 | ###### ##
rabbitmq_1 | ########## Logs:
rabbitmq_1 |
rabbitmq_1 | Starting broker...
rabbitmq_1 | 2018-04-01 05:08:20.739 [info] <0.183.0>
rabbitmq_1 | node : rabbit@3c0f8fff611d
rabbitmq_1 | home dir : /var/lib/rabbitmq
rabbitmq_1 | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq_1 | cookie hash : kCCGxiBbZMk68TcM+eAaaQ==
rabbitmq_1 | log(s) :
rabbitmq_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:23.149 [info] <0.191.0> Memory high watermark set to 799 MiB (838308659 bytes) of 1998 MiB (2095771648 bytes) total
rabbitmq_1 | 2018-04-01 05:08:23.157 [info] <0.193.0> Enabling free disk space monitoring
rabbitmq_1 | 2018-04-01 05:08:23.157 [info] <0.193.0> Disk free limit set to 50MB
rabbitmq_1 | 2018-04-01 05:08:23.161 [info] <0.195.0> Limiting to approx 1048476 file handles (943626 sockets)
rabbitmq_1 | 2018-04-01 05:08:23.161 [info] <0.196.0> FHC read buffering: OFF
rabbitmq_1 | 2018-04-01 05:08:23.161 [info] <0.196.0> FHC write buffering: ON
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@3c0f8fff611d is empty. Assuming we need to join an existing cluster or initialise from scratch...
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Peer discovery backend does not support locking, falling back to randomized delay
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> All discovered existing cluster peers:
rabbitmq_1 | 2018-04-01 05:08:23.163 [info] <0.183.0> Discovered no peer nodes to cluster with
rabbitmq_1 | 2018-04-01 05:08:23.165 [info] <0.33.0> Application mnesia exited with reason: stopped
rabbitmq_1 | 2018-04-01 05:08:23.179 [info] <0.33.0> Application mnesia started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:23.243 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq_1 | 2018-04-01 05:08:23.257 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq_1 | 2018-04-01 05:08:23.272 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq_1 | 2018-04-01 05:08:23.272 [info] <0.183.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
rabbitmq_1 | 2018-04-01 05:08:23.273 [info] <0.183.0> Priority queues enabled, real BQ is rabbit_variable_queue
rabbitmq_1 | 2018-04-01 05:08:23.278 [info] <0.366.0> Starting rabbit_node_monitor
rabbitmq_1 | 2018-04-01 05:08:23.291 [info] <0.183.0> message_store upgrades: 1 to apply
rabbitmq_1 | 2018-04-01 05:08:23.291 [info] <0.183.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
rabbitmq_1 | 2018-04-01 05:08:23.291 [info] <0.183.0> message_store upgrades: No durable queues found. Skipping message store migration
rabbitmq_1 | 2018-04-01 05:08:23.292 [info] <0.183.0> message_store upgrades: Removing the old message store data
rabbitmq_1 | 2018-04-01 05:08:23.293 [info] <0.183.0> message_store upgrades: All upgrades applied successfully
rabbitmq_1 | 2018-04-01 05:08:23.307 [info] <0.183.0> Adding vhost '/'
rabbitmq_1 | 2018-04-01 05:08:23.321 [info] <0.400.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@3c0f8fff611d/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
rabbitmq_1 | 2018-04-01 05:08:23.324 [info] <0.400.0> Starting message stores for vhost '/'
rabbitmq_1 | 2018-04-01 05:08:23.324 [info] <0.404.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
rabbitmq_1 | 2018-04-01 05:08:23.326 [info] <0.400.0> Started message store of type transient for vhost '/'
rabbitmq_1 | 2018-04-01 05:08:23.326 [info] <0.407.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
rabbitmq_1 | 2018-04-01 05:08:23.327 [warning] <0.407.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
rabbitmq_1 | 2018-04-01 05:08:23.328 [info] <0.400.0> Started message store of type persistent for vhost '/'
rabbitmq_1 | 2018-04-01 05:08:23.330 [info] <0.183.0> Creating user 'guest'
rabbitmq_1 | 2018-04-01 05:08:23.332 [info] <0.183.0> Setting user tags for user 'guest' to [administrator]
rabbitmq_1 | 2018-04-01 05:08:23.334 [info] <0.183.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
rabbitmq_1 | 2018-04-01 05:08:23.338 [info] <0.445.0> started TCP Listener on [::]:5672
rabbitmq_1 | 2018-04-01 05:08:23.342 [info] <0.183.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:23.345 [info] <0.183.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:23.346 [info] <0.33.0> Application rabbit started on node rabbit@3c0f8fff611d
rabbitmq_1 | 2018-04-01 05:08:23.728 [info] <0.5.0> Server startup complete; 0 plugins started.
rabbitmq_1 | completed with 0 plugins.
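Two secondary failures are visible in the api log above: Mongo is dialed at `127.0.0.1:27017`, but inside a Compose network the api container must address the service name `mongodb`, not localhost; and RabbitMQ refuses the first connection simply because it is still booting when the api starts. A minimal sketch of env-driven hosts — the variable names `MONGO_HOST`/`RABBIT_HOST` and the database name are assumptions, not the project's actual config:

```javascript
// Hypothetical config helper: pick connection hosts from the environment so
// the same code works inside docker-compose (host = service name) and when
// running directly on the host machine (host = localhost).
const mongoHost = process.env.MONGO_HOST || 'localhost';
const rabbitHost = process.env.RABBIT_HOST || 'localhost';

// 'powbox' is a placeholder database name, not taken from the project.
const mongoUrl = `mongodb://${mongoHost}:27017/powbox`;
const rabbitUrl = `amqp://${rabbitHost}:5672`;

module.exports = { mongoUrl, rabbitUrl };
```

docker-compose.yml would then set `MONGO_HOST=mongodb` and `RABBIT_HOST=rabbitmq` for the api and worker services; a retry loop on the amqp connect would still be needed, since RabbitMQ only accepts connections a few seconds after the container reports it has started.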
### Using the second setup as described in the README
This works "better", as the worker and api can both be started, but it fails when processing transaction requests.
#### Repro steps:
Bring up the rabbitmq and mongodb services:

```shell
docker-compose run -d -p 5672:5672 rabbitmq
docker-compose run -d -p 27017:27017 mongodb
```
Start the api:

```shell
npm run start:api
```

Start the worker:

```shell
npm run start:worker
```
Execute client code that tries to send a transaction:
```js
const IOTA = require('iota.lib.js');
const remoteCurl = require('@iota/curl-remote');

const providerUrl = 'https://nodes.testnet.iota.org:443';
const powboxUrl = 'http://localhost:3000';

const iotaInst = new IOTA({
  provider: providerUrl,
  minWeightMagnitude: 9
});

// Patch the instance so proof-of-work is delegated to the powbox
remoteCurl(iotaInst, powboxUrl);

const seed = 'HQVYRUFMJSZLIPGURKYLJJDQKDHQHUWFPONPGTTNXLCAOYYBH9SAXMROYVLRNOPBQAJ9GF9IEEFPPBYBJ';
const targetAddress = 'HTBAD9NDYVTLFGEGJE99SUNJGFFJ9BQ9SQAYVXYUDTJOXLVEFXIIQDLHDQEXHHRD9JZSLYSTQGZUZGIKXYGRRDJRGC';

const transfer = [{
  value: 0,
  address: targetAddress,
}];

console.log('sending transfer', transfer);

// sendTransfer(seed, depth, minWeightMagnitude, transfers, options, callback)
iotaInst.api.sendTransfer(
  seed,
  4,
  9,
  transfer,
  {},
  (error, bundle) => {
    if (error) {
      console.error(error);
      return false;
    }
    console.log(bundle);
    return false;
  },
);
```
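As an aside, a small input sanity check can rule out malformed seeds/addresses before blaming the powbox. IOTA seeds and addresses are trytes (characters A–Z plus `9`); this helper is a hypothetical illustration, not part of the project:

```javascript
// Hypothetical validator: trytes are strings over the alphabet A-Z plus '9'.
// A seed is 81 trytes; an address with checksum is 90 trytes.
const TRYTE_RE = /^[A-Z9]+$/;

function isTrytes(value, length) {
  return typeof value === 'string'
    && TRYTE_RE.test(value)
    && (length === undefined || value.length === length);
}

module.exports = isTrytes;
```

For example, `isTrytes(seed, 81)` should hold for a standard 81-tryte seed and `isTrytes(targetAddress, 90)` for a checksummed address.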
Results in the error:

```
error: Dynamic Linking Error: dlopen(/opt/libccurl.dylib, 2): image not found
```
Full worker output from start to finish:

```
{"queue":"attach-request","name":"listening-for-mesages","message":"Timeout set to 300 seconds"}
{"queue":"attach-request","name":"job-received-ok","jobId":"5ac0670daf9fbc31ce0387d7"}
{"queue":"attach-request","name":"job-complete-error","jobId":"5ac0670daf9fbc31ce0387d7","message":"Dynamic Linking Error: dlopen(/opt/libccurl.dylib, 2): image not found","stack":"Error: Dynamic Linking Error: dlopen(/opt/libccurl.dylib, 2): image not found\n at new DynamicLibrary (/Users/lump/work/code/powbox-server/node_modules/ffi/lib/dynamic_library.js:74:11)\n at Object.Library (/Users/lump/work/code/powbox-server/node_modules/ffi/lib/library.js:45:12)\n at module.exports (/Users/lump/work/code/powbox-server/node_modules/ccurl.interface.js/index.js:83:26)\n at channel.consume.msg (/Users/lump/work/code/powbox-server/src/worker/index.js:166:25)\n at ConfirmChannel.BaseChannel.dispatchMessage (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:466:12)\n at ConfirmChannel.BaseChannel.handleDelivery (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:475:15)\n at emitOne (events.js:116:13)\n at ConfirmChannel.emit (events.js:211:7)\n at /Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:263:10\n at ConfirmChannel.content (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:316:9)\n at ConfirmChannel.C.acceptMessageFrame (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:231:31)\n at ConfirmChannel.C.accept (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/channel.js:384:17)\n at Connection.mainAccept [as accept] (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/connection.js:64:33)\n at Socket.go (/Users/lump/work/code/powbox-server/node_modules/amqplib/lib/connection.js:477:48)\n at emitNone (events.js:106:13)\n at Socket.emit (events.js:208:7)"}
```
OS: macOS 10.13.3 (17D47)