#
2078:0996dd223cdd |
| 18-Dec-2021 |
Alejandro Colomar |
Fixed indentation.
Some lines (incorrectly) had an indentation of 3 or 5, or 7 or 9, or 11 or 13, or 15 or 17 spaces instead of 4, 8, 12, or 16. Fix them.
Found with:
$ find src -type f | xargs grep -n '^ [^ ]';
$ find src -type f | xargs grep -n '^ [^ *]';
$ find src -type f | xargs grep -n '^ [^ ]';
$ find src -type f | xargs grep -n '^ [^ *]';
$ find src -type f | xargs grep -n '^ [^ +]';
$ find src -type f | xargs grep -n '^ [^ *+]';
$ find src -type f | xargs grep -n '^ [^ +]';
$ find src -type f | xargs grep -n '^ [^ *+]';
|
Revision tags: 1.26.1-1, 1.26.1 |
|
#
2024:cb12b9fde0ae |
| 01-Dec-2021 |
Max Romanov |
Fixing uninitialized structure field.
The port's "data" field may be used by the application and thus needs to be set to NULL. The issue was introduced in the f8a0992944df commit.
Found by Coverity (CID 374352).
|
#
2014:f8a0992944df |
| 24-Nov-2021 |
Max Romanov |
Sending shared port to application prototype.
The application process now starts with the shared port (and queue) already configured, but it still waits for a PORT_ACK message from the router to start request processing (the so-called "ready state").
Waiting for router confirmation is necessary; otherwise, the application may produce a response and send it to the router before the router has the information about the application process. This is a subject for further optimization.
|
#
2013:797e9f33226d |
| 23-Nov-2021 |
Valentin Bartenev |
Fixed possible access to an uninitialized field.
The "recv_msg.incoming_buf" field is checked after jumping to the "done" label if nxt_socket_msg_oob_get_fds() returns an error.
Also moved the initialization of "port_msg" closer to its first usage.
Found by Coverity (CID 373899).
|
Revision tags: 1.26.0-1, 1.26.0 |
|
#
1998:c8790d2a89bb |
| 09-Nov-2021 |
Tiago Natel de Moura |
Introducing application prototype processes.
|
#
1996:35873fa78fed |
| 09-Nov-2021 |
Tiago Natel de Moura |
Introduced SCM_CREDENTIALS / SCM_CREDS in the socket control msgs.
|
#
1980:43553aa72111 |
| 28-Oct-2021 |
Max Romanov |
Moving request limit control to libunit.
Introducing application graceful stop. For now, it is only used when the application process reaches the request limit value.
This closes #585 issue on GitHub.
|
Revision tags: 1.25.0-1, 1.25.0, 1.24.0-1, 1.24.0, 1.23.0-1, 1.23.0 |
|
#
1810:9fcc8edf2201 |
| 02-Mar-2021 |
Max Romanov |
Fixing warnings on Solaris.
pthread_t on Solaris is an integer type whose size is not equal to the pointer size. To avoid warnings, casts to and from pointers need to be done via the uintptr_t type.
This change was originally proposed by Juraj Lutter <juraj@lutter.sk>.
|
Revision tags: 1.22.0-1, 1.22.0 |
|
#
1767:582a004c73f8 |
| 29-Dec-2020 |
Max Romanov |
Libunit: processing single port message.
This partially reverts the optimisation introduced in 1d84b9e4b459 to avoid an unpredictable block in nxt_unit_process_port_msg(). Under high load, this function may never return control to its caller, and the external event loop (in Node.js and Python asyncio) won't be able to process other scheduled events.
To reproduce the issue, two request processing types are needed: 'fast' and 'furious'. The 'fast' one simply returns a small response, while the 'furious' schedules asynchronous calls to external resources. Thus, if Unit is subjected to a large amount of 'fast' requests, the 'furious' request processing freezes until the high load ends.
The issue was found by Wu Jian Ping (@wujjpp) during Node.js stream implementation discussion and relates to PR #502 on GitHub.
|
#
1756:72e75ce3c99f |
| 17-Dec-2020 |
Max Romanov |
Libunit: fixed shared memory waiting.
The nxt_unit_ctx_port_recv() function may return the NXT_UNIT_AGAIN code, in which case an attempt to reread the message should be made.
The issue was reproduced in load testing with response sizes of 16k and up. In the rare case of an NXT_UNIT_AGAIN result, a buffer of size -1 was processed, which triggered a 'message too small' alert; after that, the app process was terminated.
|
#
1755:3b0331284155 |
| 17-Dec-2020 |
Max Romanov |
Limiting app queue notifications count in socket.
Under high load, a queue synchronization issue may occur, starting from the steady state when an app queue message is dequeued immediately after it has been enqueued. In this state, the router always puts the first message in the queue and is forced to notify the app about a new message in an empty queue using a socket pair. On the other hand, the application dequeues and processes the message without reading the notification from the socket, so the socket buffer overflows with notifications.
The issue was reproduced during Unit load tests. After a socket buffer overflow, the router is unable to notify the app about a new first message. When another message is enqueued, a notification is not required, so the queue grows without being read by the app. As a result, request processing stops.
This patch changes the notification algorithm by counting the notifications in the pipe instead of getting the number of messages in the queue.
|
#
1728:b39918d13444 |
| 24-Nov-2020 |
Valentin Bartenev |
Libunit: improved error logging around initialization env variable.
|
Revision tags: 1.21.0-1, 1.21.0 |
|
#
1720:7a07649b389c |
| 19-Nov-2020 |
Max Romanov |
Libunit: fixing read buffer leakage.
If the shared queue is empty, the allocated read buffer should be explicitly released.
Found by Coverity (CID 363943). The issue was introduced in f5ba5973a0a3.
|
#
1716:825d30598a97 |
| 18-Nov-2020 |
Max Romanov |
Libunit: fixing read buffer allocations on exit.
|
#
1715:95874fd97501 |
| 18-Nov-2020 |
Max Romanov |
Libunit: closing active requests on quit.
|
#
1714:8e02af45485f |
| 18-Nov-2020 |
Max Romanov |
Libunit: making minor tweaks.
Removing unnecessary context operations from shared queue processing loop. Initializing temporary queues only when required.
|
#
1713:f5ba5973a0a3 |
| 18-Nov-2020 |
Max Romanov |
Go: removing C proxy functions and re-using goroutines.
|
#
1712:bbd7893e9ce1 |
| 18-Nov-2020 |
Max Romanov |
Libunit: fixing racing condition in request struct recycling.
The issue occurred under highly concurrent request load in Go applications. Such applications are multi-threaded but use a single libunit context; any thread-safe code in the libunit context is only required for Go applications.
As a result of improper request state reset, the recycled request structure was recovered in the released state, so further operations with this request resulted in 'response already sent' warnings. However, the actual response was never delivered to the router and the client.
|
#
1711:5c082bad8457 |
| 18-Nov-2020 |
Max Romanov |
Libunit: fixing racing condition for port add / state change.
The issue only occurred in Go applications because "port_send" is overloaded only in Go. To reproduce it, send multiple concurrent requests to the application after it has initialised. The warning message "[unit] [go] port NNN:dd not found" is the first visible aspect of the issue; the second and more valuable one is a closed connection, an error response, or a hanging response to some requests.
When the application starts, it is unaware of the router's worker thread ports, so it requests the ports from the router after receiving requests from the corresponding router worker threads. When multiple requests are processed simultaneously, the router port may be required by several requests, so request processing starts only after the application receives the required port information. The port should be added to the Go port repository after its 'ready' flag is updated. Otherwise, Unit may start processing some requests and use the port before it is in the repository.
The issue was introduced in changeset 78836321a126.
|
#
1710:e598cd15bd91 |
| 18-Nov-2020 |
Max Romanov |
Libunit: improving logging consistency.
Debug logging depends on macros defined in nxt_auto_config.h.
|
#
1698:bf14c6f5f97b |
| 10-Nov-2020 |
Max Romanov |
Fixing multi-buffer body send to application.
The application shared queue is only capable of passing one shared memory buffer. The rest of the buffers in the chain need to be sent directly to the application in response to the REQ_HEADERS_ACK message.
The issue can be reproduced with configurations where 'body_buffer_size' is greater than the memory segment size (10 MB). Requests with a body size greater than 10 MB simply get stuck, i.e. they are not passed to the application, which keeps waiting for more data from the router.
The bug was introduced in 1d84b9e4b459 (v1.19.0).
|
#
1668:03fa2be97871 |
| 27-Oct-2020 |
Max Romanov |
Preserving the app port write socket.
The socket is required for intercontextual communication in multithreaded apps.
|
#
1667:32b9bb5dbcbe |
| 27-Oct-2020 |
Max Romanov |
Libunit: waking another context with the RPC_READY message.
|
#
1666:c224d375d89b |
| 27-Oct-2020 |
Max Romanov |
Router: introducing the PORT_ACK message.
The PORT_ACK message is the router's response to the application's NEW_PORT message. After receiving PORT_ACK, the application is safe to process requests using this port.
This message avoids a racing condition when the application starts processing a request from the shared queue and sends REQ_HEADERS_ACK. The REQ_HEADERS_ACK message contains the application port ID as reply_port, which the router uses to send request data. When the application creates a new port, it immediately sends it to the main router thread. Because the request is processed outside the main thread, a racing condition can occur between the receipt of the new port in the main thread and the receipt of REQ_HEADERS_ACK in the worker router thread where the same port is specified as reply_port.
|
#
1665:0435cfd79a54 |
| 27-Oct-2020 |
Max Romanov |
Libunit: releasing cached read buffers when destroying context.
|