2026/03/26
Thursday, March 26, 2026 (#85)
- 09:28 My first task for today would appear to be: figure out where the Updater is being set to use Stream id=3, and try changing that to id=1.
- 09:41 It already is being set to that, in [Client]::NewSchemaList(): `UpdateClass::FromEngine($this->OEngine(), $oCSListen);` where `$oCSListen` is in fact id=1
  - ...so it's being overridden somewhere. Need to find that.
- 09:44 It's just a few lines later:

  ```php
  # if ($qoRun->HasIt()) {
  #     $oRSLecture = $qoRun->GetIt()->OLecture();
  #     UpdateClass::FromEngine($this->OEngine(), $oRSLecture);
  # }
  ```

  - ...but I can't see any reason for those last 2 lines to exist, so I'm commenting them out. Where does that leave us, then...
- 09:47 unfortunately, it leaves us without any on-screen updates. HMM.
- 10:04 The Stream object in question is not somehow losing its Updater object. So why isn't the Updater being used...
- 10:07 It's not getting to `InOutLoop()` -- so there has to be some other loop before that where it's getting stuck.
  - I vaguely remember finding this earlier, but can't remember details.
- 10:13 I was wrongish. It gets to `InOutLoop()`, but then gets stuck inside another loop that's called from within that one: `$oOpNew = $oCanals->ConveyCheck();`
  - ...which then gets stuck on a Canal which is conveying id4 to... id1? Is that right? ...Yes, apparently another object got into the mix, so the stream which was id3 is now id4.
- Since renumbering has already happened, this seems like a good moment to try my easily-implemented idea for {possibly making the IDs a little more stable maybe}.
- 10:26 (I was kinda expecting that the IDs I'm looking at would now start with higher numbers, but this hasn't been the case -- so my new-and-entirely-optional code isn't getting called, oh well.) So anyway, here's the new dramatis personae enumeration:
- STREAMS:
  - 1: `[WFe]\IO\Aspect\Connx\Stream\Finite\cBuffer` "buffer stream" - DEST (in ConveyNow()), LISTEN (in Client), Cmd listener; UPDATER found consistently
    - ⇐ (id4) process stream: stdout
  - 4: `[WFe]\IO\Aspect\Connx\Stream\Native\cExec` "process stream: stdout" - SRCE (in ConveyNow())
    - ⇒ (id1) buffer stream
  - 6: "process stream: stderr"
    - ⇒ (id7) screen
  - 7: "screen"
    - ⇐ (id6) process stream: stderr
- OTHER OBJECTS:
  - 0: `[WFe]\IO\Aspect\Connx\Runner\Local\cProc` ("The Loop Abides")
  - 2: `[WFe]\Sys\Data\Engine\endpt\Client\MyMar\cMaria`
  - 3: `[WFe]\IO\Aspect\Connx\Stream\Native\cExec` -- confusingly tagged in debug output as "<- LISTEN OBJECT"; check on this
  - 5: `[WFe]\IO\Aspect\Connx\cConveyerCanal` for [(id4) process stream: stdout] ⇒ [(id1) buffer stream] (bytes: 0 pulled, 0 pushed; length=0)
  - 8: `[WFe]\IO\Aspect\Connx\cConveyerCanal` for [(id6) process stream: stderr] ⇒ [(id7) screen] (bytes: 0 pulled, 0 pushed; length=0)
  - 9: `[WFe]\IO\Aspect\Connx\aux\A\cCanals` (goes through all the Conveyers aka Canals)
- 11:09 Back to the current question: why is it not updating the screen while getting stuck conveying id4 ⇒ id1 (and why is it getting stuck)?
- 11:15 Aha: `ConveyNow()` assumes that we only want to look for an Updater on the `$oSrce` object. (It is in fact now on the `$oDest` object, where it needs to be in order to properly assess the incoming data.) Do we... (a) have it check both of them, or (b) switch it to always look at `$oDest`, or (c) somehow explicitly pass it the `Updater` we want to use? I'm thinking "(b)" is easiest, IF it doesn't cause problems in other situations...
- 11:43 The `Conveyer` class does in fact have a `QOUpdater()` method, and I think I remember that `Canals` also has one which passes the Updater down to the individual Conveyer Canals, so in theory we could implement "(c)" fairly easily. Checking on that...
  - Not quite. `Conveyer` has it because `Connx` requires and implements it, but apparently `Canals` (which derives from an array-class, not `Connx`) does not.
  - I could: (c1) add it to `Canals`, or (c2) see if there's some way to pass it directly to the `Conveyer` whose activities we want to watch. Investigating...
  - The problem with "(c1)" is that I don't think we want all the `Conveyer` objects to have the same `Updater`, even if we did want them all to have one. (I can see this happening if we're just using one of the `Updater`s for the readout and the others are just for detecting EOF or some other data-dependent condition.)
  - The problem with "(c2)" is, basically: how? The `Conveyer` objects are all created and provisioned within the `Runner`. We can access that after the initial `DoCommand` via [Runner]::OCanals(), but the `Canals` list-object does not have methods for addressing the individual `Canal` objects in an identity-reliable way.
  - I'm now leaning towards "(a)". [slowly tilts over and falls into "(a)"] Yep, that's done it. Gonna try to implement that one. Maybe just call any `Updater`s that are set (`QUpdater` makes this super-easy, if I'm not mistaken).
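For reference, option (a) could look roughly like the sketch below. The method names (`ConveyNow()`, `QUpdater()`) and the SRCE/DEST roles come from the log above; the class structure around them is invented for illustration, not the framework's actual code.

```php
<?php
// Hypothetical sketch of option (a): when conveying, notify any Updater
// found on EITHER endpoint, instead of only checking $oSrce.
// Names follow the log; the internals here are assumptions.

interface iUpdater {
    public function Update(string $sStatus): void;
}

abstract class cStream {
    private ?iUpdater $oUpd = null;
    public function SetUpdater(iUpdater $oUpd): void { $this->oUpd = $oUpd; }
    public function QUpdater(): ?iUpdater { return $this->oUpd; }  // NULL if unset
}

class cConveyer {
    public function __construct(
        private cStream $oSrce,
        private cStream $oDest
    ) {}

    public function ConveyNow(string $sData): void {
        // ...actual byte transfer elided...
        // option (a): ping every Updater that happens to be set
        foreach ([$this->oSrce->QUpdater(), $this->oDest->QUpdater()] as $oUpd) {
            $oUpd?->Update('conveyed ' . strlen($sData) . ' bytes');
        }
    }
}
```

The nullsafe call makes the "call any Updaters that are set" behavior a two-line change, which is the appeal of (a) over (c).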
- 12:05 Got on-screen updates again! ...and now we are back to "why is it getting stuck" -- or, more precisely, "what do I need to do in `CheckStream()` to reliably and accurately detect when we've got the necessary data?"
  - First task: need to be able to see the accumulated data. `Buffer` did not yet have a way to do this, so I added `SContents()`.
  - Second task: take a look at the contents. When it's not empty, display it with markers so I can see if there's anything useful at the end.
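A minimal sketch of what an `SContents()` accessor on a buffer stream might look like. The real `cBuffer` class certainly differs; only the idea of exposing the accumulated bytes (and displaying them with markers) is taken from the log.

```php
<?php
// Hypothetical cBuffer sketch: accumulate written data and expose it
// via SContents() so CheckStream()-style logic can inspect what has
// actually arrived. The marker variant makes trailing whitespace or
// prompt fragments visible.

class cBuffer {
    private string $sData = '';

    public function Write(string $sChunk): void {
        $this->sData .= $sChunk;
    }

    // new accessor: everything accumulated so far
    public function SContents(): string {
        return $this->sData;
    }

    // display with markers, so anything useful at the end stands out
    public function SContentsMarked(): string {
        return '[[' . $this->sData . ']]';
    }
}
```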
- 12:17 Problem: not getting any data. I feel like this has happened before and I fixed it, but maybe only the first part of that. Feels like a nap is needed.
- 15:00 I need to be sure of what's actually being received by the executable, and (if that looks right) what it's actually sending. The ironclad way to do that is to write a dedicated executable which logs everything, but I'd also have to set up some infra on this end to talk to it instead of the DB engine (mysql/mariadb)... unless there's some way to tell mymar to log everything it receives...
- 15:19 Copilot says (and I have confirmed) that there is a standard Linux command which will log the entirety of any interactive session. It's just `script [ <logfile name> ]` to start the logging, and `exit` to stop it. Adding that should be close enough to trivial, especially if I can send both commands at the same time. Investigating...
- 15:33 The commands I'm supposedly running inside the script are running, but nothing shows up in the log file.
  - ...huh, it's running them in the wrong order? (mymar first, then script) WTF.
  - Oh duhh, I'm invoking `script` in the wrong place.
  - Still zero bytes... oh, right, I have to be sure to keep the process-session open. Tricky...
- 16:02 Running them in the right order, from the Command Queue, but still zero bytes. (I guess I need to make sure it's opening the process before it starts; otherwise the process will open and shut for each line.)
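The "open the process once, then feed it every queued line" idea can be demonstrated with plain `proc_open()`. This is a stand-in sketch, not the framework's Command Queue: `cat` substitutes for the real `script`/mymar pipeline, but the session-lifetime point is the same.

```php
<?php
// Sketch of the "keep the process-session open" fix: open ONE process,
// feed it all queued lines, and only then close stdin and collect the
// output. Running proc_open() per line instead gives each line its own
// short-lived process -- the zero-bytes symptom described above.
// 'cat' stands in for the real script/mymar pipeline.

function RunQueuedLines(array $aLines, string $sCmd = 'cat'): string {
    $aSpec = [
        0 => ['pipe', 'r'],   // child stdin
        1 => ['pipe', 'w'],   // child stdout
        2 => ['pipe', 'w'],   // child stderr
    ];
    $rProc = proc_open($sCmd, $aSpec, $aPipes);
    if (!is_resource($rProc)) {
        throw new RuntimeException("could not start: $sCmd");
    }
    foreach ($aLines as $sLine) {          // all lines into ONE session
        fwrite($aPipes[0], $sLine . "\n");
    }
    fclose($aPipes[0]);                    // EOF ends the session
    $sOut = stream_get_contents($aPipes[1]);
    fclose($aPipes[1]);
    fclose($aPipes[2]);
    proc_close($rProc);
    return $sOut;
}
```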
- 17:40 Trying to add another layer to the process is revealing the flaws in my process-management model. I'm now working out how to resolve that.
- 19:43 Part of the problem here is phylogeny -- how the current model evolved. It comes from needing to wrap both SSH2 processes and local execution (`proc_*()` functions, though in practice just `proc_open()`, and hopefully `proc_close()` is also called when it should be, to tidy things up), which themselves model the underlying process differently...
Process Control Modeling
2026/03/27: I'm now adapting this section to use as the process-management subsystem documentation.
Two different approaches:
- In SSH2, you first open the session (`ssh2_connect()`) to get a session-resource, and then you send commands (`ssh2_exec()`) via that resource.
  - You can also open a shell (`ssh2_shell()`) for further commands, though it's unclear what the difference is.
- In proc, a process starts with a command. The command can then be left running to receive additional data, which will be interpreted by the initial command.
I'm still trying to figure out a model-wrapper that will correctly handle both of these. The required pieces seem to be:
- (hook) Open the object
- (object) Initial command
- (object) Lecture-stream (for the process to send data)
- (object) Listen-stream (for the process to receive data)
- (hook) Shut the object
The most awkward fit is the "initial command". On ssh2, every command sent via an open connection is an "initial" command.
Maybe the trick here is... to think in terms of different levels of nesting. My first attempt to map the model to the fx calls:
| Model Piece | Action: Proc | Action: SSH2 |
|---|---|---|
| Open | NO OP | `ssh2_connect()` |
| Command | `proc_open()` | `ssh2_exec()` |
| Shut | NO OP | `ssh2_disconnect()` |
After thinking about that for a bit, I think maybe what I need is actually two models/clades:
- Process runner: handles creation of process session objects
- Process session: handles individual processes, including termination and (where available) status
So: in each case (Proc and SSH2):
- create/provision Runner object
- open it (which will be a NO OP for Proc)
- use it to run commands, each of which creates a Session
- each Session has status info and can be closed
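The Runner/Session split above can be sketched as follows. Only the Local (Proc) clade is implemented; the SSH2 clade would put `ssh2_connect()` in `Open()` and `ssh2_exec()` in `RunCommand()`. All class names here are mine, chosen for the sketch, not the framework's.

```php
<?php
// Sketch of the proposed two-clade model:
//   - Runner: creates process sessions (Open/Shut are NO OPs for Proc)
//   - Session: one process, with output and closeable status

abstract class cProcessRunner {
    public function Open(): void {}   // NO OP for Proc; ssh2_connect() for SSH2
    public function Shut(): void {}   // NO OP for Proc; ssh2_disconnect() for SSH2
    abstract public function RunCommand(string $sCmd): cProcessSession;
}

abstract class cProcessSession {
    abstract public function SOutput(): string;  // collected stdout
    abstract public function Close(): int;       // exit status (where available)
}

class cLocalRunner extends cProcessRunner {
    public function RunCommand(string $sCmd): cProcessSession {
        return new cLocalSession($sCmd);
    }
}

class cLocalSession extends cProcessSession {
    /** @var resource */
    private $rProc;
    private array $aPipes;

    public function __construct(string $sCmd) {
        $aSpec = [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
        $this->rProc = proc_open($sCmd, $aSpec, $this->aPipes);
    }
    public function SOutput(): string {
        fclose($this->aPipes[0]);    // no more input; let the process finish
        $sOut = stream_get_contents($this->aPipes[1]);
        fclose($this->aPipes[1]);
        fclose($this->aPipes[2]);
        return $sOut;
    }
    public function Close(): int {
        return proc_close($this->rProc);
    }
}
```

This keeps the awkward "initial command" question out of the Runner entirely: every command is just a new Session, which matches both the SSH2 view (every exec is "initial") and the Proc view (the command starts the process).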