Concurrency¶
Async kernel handles message requests concurrently. How the request is handled is a function of the channel and message type.
The following `RunMode`s are provided:

- `direct`: Run the handler directly in the message loop (recommended only for short-running code)
- `queue`: Run in a queue (default for the `shell` channel)
- `task`: Run in a task (default for the `control` channel)
- `thread`: Run in a worker thread
The kernel decides the run mode dynamically with the method get_run_mode.
```python
from async_kernel import utils
from async_kernel.typing import MsgType

kernel = utils.get_kernel()
kernel.get_run_mode(MsgType.comm_msg)
```

```
<RunMode.queue: 'queue'>
```
Below is a list of the run modes according to the message type and channel (SocketID).
```python
data = kernel.all_concurrency_run_modes()
try:
    import pandas as pd
except ImportError:
    print(data)
else:
    data = pd.DataFrame(data)
    data["RunMode"] = data.RunMode.str.replace("##", "")
    data = data.pivot(index="MsgType", columns=["SocketID"], values="RunMode")  # noqa: PD010
    data = data.reindex(["shell", "control"], axis=1)
    display(data)
```
| MsgType | shell | control |
|---|---|---|
| comm_close | direct | direct |
| comm_info_request | direct | direct |
| comm_msg | queue | queue |
| comm_open | direct | direct |
| complete_request | thread | thread |
| create_subshell_request | None | thread |
| debug_request | None | queue |
| delete_subshell_request | None | thread |
| execute_request | queue | queue |
| history_request | thread | thread |
| inspect_request | thread | thread |
| interrupt_request | direct | direct |
| is_complete_request | thread | thread |
| kernel_info_request | direct | direct |
| list_subshell_request | None | direct |
| shutdown_request | None | direct |
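The reshape used above is plain pandas. Here is a minimal self-contained sketch of the same pivot on synthetic rows; the three-column schema is assumed from the output above, not taken from async_kernel:

```python
import pandas as pd

# Synthetic rows shaped like the assumed all_concurrency_run_modes() output.
data = pd.DataFrame(
    {
        "MsgType": ["execute_request", "execute_request", "interrupt_request"],
        "SocketID": ["shell", "control", "control"],
        "RunMode": ["queue", "queue", "direct"],
    }
)

# One row per MsgType, one column per channel; missing combinations become NaN.
table = data.pivot(index="MsgType", columns="SocketID", values="RunMode")
table = table.reindex(["shell", "control"], axis=1)
print(table)
```

`pivot` requires each (index, columns) pair to be unique, which holds here because each message type appears at most once per channel.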
Execute request run mode¶
There are a few options to modify how code cells are run:

- Metadata
- Directly in code
- Tags
- Message header (in custom messages)
Warning
Only JupyterLab is known to allow concurrent execution of cells.
Example code¶
- This example requires ipywidgets
- Ensure you are running an async kernel

Let's define a function that we'll reuse for the remainder of the notebook.
```python
async def demo():
    import threading

    from aiologic import Event
    from ipywidgets import Button

    print(f"Thread name: '{threading.current_thread().name}'")
    button = Button(description="Finish")
    event = Event()
    button.on_click(lambda _: event.set())
    display(button)
    await event
    button.close()
    print(f"Finished ... thread name: '{threading.current_thread().name}'")
    return "Finished"
```
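The demo blocks on an event that a UI callback later sets. The same pattern can be sketched with only the standard library, substituting `asyncio.Event` and a timer thread for `aiologic.Event` and the button; this is an illustration of the wait-for-callback idiom, not async_kernel code:

```python
import asyncio
import threading


async def wait_for_click() -> str:
    event = asyncio.Event()
    loop = asyncio.get_running_loop()

    def on_click() -> None:
        # Callbacks may fire from another thread; marshal back to the loop.
        loop.call_soon_threadsafe(event.set)

    # Simulate a user clicking the button after 10 ms, from a worker thread.
    threading.Timer(0.01, on_click).start()
    await event.wait()
    return "Finished"


print(asyncio.run(wait_for_click()))  # Finished
```

Note the `call_soon_threadsafe` step: `asyncio.Event` is not thread-safe, so a callback arriving from another thread must hand the `set()` over to the event loop.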
Let's run it normally (`queue`):

```python
await demo()
```

```
Thread name: 'MainThread'
Button(description='Finish', style=ButtonStyle())
```
Run mode: task¶
The task mode instructs the kernel to execute the code in a task separate from the queue. Both the task and thread run modes can start while the kernel is busy executing, and there is no imposed limit on the number of tasks (or threads) that can run concurrently.
See also the Caller example on how to call directly.
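The claim that any number of tasks can run side by side can be illustrated with plain asyncio; this sketch shows only the underlying concurrency idea, not async_kernel's own scheduler:

```python
import asyncio


async def job(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return name


async def main() -> list[str]:
    # All three jobs start at once; the total wall time is roughly the
    # longest delay, not the sum, because they run concurrently.
    return list(await asyncio.gather(job("a", 0.02), job("b", 0.01), job("c", 0.03)))


print(asyncio.run(main()))  # ['a', 'b', 'c']
```

`asyncio.gather` preserves argument order in its results, regardless of which job finishes first.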
```python
# task
# Tip: try running this cell while the previous cell is still busy.
await demo()
```

```
Thread name: 'MainThread'
Button(description='Finish', style=ButtonStyle())
```
Run mode: thread¶

```python
# This time we'll use the tag to run the cell in a thread
await demo()
```

```
Thread name: 'MainThread'
Button(description='Finish', style=ButtonStyle())
```
```python
# thread
%callers  # magic provided by async kernel
```

```
Name     Running  Protected  Thread
──────────────────────────────────────────────────────────────────────
Shell    ✓        🔐         <_MainThread(MainThread, started 140483262767232)>
Control  ✓        🔐         <Thread(Control, started daemon 140483191084736)>
         ✓                   <Thread(async_kernel_caller, started daemon 140482285926080)> ← current
```