A day ago I discovered that triggers, contrary to the BIKI information (now corrected), are global objects, just like any other MP object you create with createVehicle. After a few experiments it became clear that creating a trigger on a client leaves the server with a dead trigger object when that client logs out. What’s more, anyone joining the server will get the dead trigger synced to them as well. This can potentially cause an avalanche of dead triggers on the network. So I submitted a bug report, based on my understanding of triggers at the time, derived from the BIKI info.
This morning Karel replied to the ticket explaining that this is indeed intended behaviour. At first I was like what the…? But after Karel helped me clear up a few questions, I have to say it is actually a pretty good design. This is what I’ve learned:
- A trigger should only be created on the server, to avoid a barrage of dead triggers on client disconnects
- Editor-based triggers (mission.sqm Sensors) are indeed created only on the server
- A single server trigger can be used by all clients as well as the server
- You can transfer trigger ownership back and forth between server and clients; it does not affect the trigger’s functionality on other clients
- A trigger will always become server property when its current owner logs out
- Trigger params should be set individually for every client if you do it by script; hence you can have the same trigger behave differently on different clients if you wish
I will explain this last point a bit more. When you create a network object you can set a local variable in this object’s namespace: obj setVariable ["myvar", someval]. This variable will only exist on the PC on which you executed the setVariable command, unless you choose to make it global by passing the 3rd param: obj setVariable ["myvar", someval, true]. Think of a trigger setup as setting up a local variable: a trigger setup you create on one client exists only locally on that client, while the trigger itself exists globally on all clients.
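To illustrate the analogy (object and variable names here are placeholders of my own):

```sqf
// Local: "myvar" exists only on the machine that ran this line.
// Other clients querying it will get nil.
_obj setVariable ["myvar", 42];

// Global: 3rd param true broadcasts the variable to all machines.
_obj setVariable ["myvar", 42, true];

// Trigger setup behaves like the local case: the setTriggerXXX
// calls you make configure only this machine's instance, even
// though the trigger object itself is global.
```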
So if I create a trigger on the server, then setTriggerActivation on one client for “WEST” and “PRESENT” and on another client for “EAST” and “PRESENT”, the trigger instance on the first client will execute its On Activation statement when a BLUFOR unit is in the area, but the instance on the second client will not, even though it is the same trigger at the same position. So if, for example, you want to make a trigger watch player, and you use trigger triggerAttachVehicle [player] on each client, the instances of the same trigger on each client will each watch only their own player.
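A sketch of the player-watching pattern just described, assuming each client can already reference the single server-created trigger (I use a hypothetical global variable "KK_trigger" to pass it around):

```sqf
// Runs on every client. KK_trigger is assumed to hold the one
// server-created trigger, broadcast earlier with publicVariable.
_trg = missionNamespace getVariable "KK_trigger";

// Attachment is part of the local trigger setup, so each client's
// instance of the same global trigger watches that client's own player.
_trg triggerAttachVehicle [player];
```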
Why is this design good? You can have just 1 trigger associated with 1 vehicle across the network, watching every player on every client, while on the server you can make the same trigger watch something else. Compact and efficient. Since triggers are “EmptyDetector” objects, you can always check how many triggers you have with allMissionObjects “EmptyDetector” to keep track of them.
Now, setting up a trigger with script can be a bit tricky. You first have to make sure it is only created on the server, and then you have to make sure the trigger exists before you try to set up trigger params on clients. In the following example, which is meant to be used in init.sqf, waitUntil is pretty much a requirement in case the server-side init.sqf lags behind for any reason:
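A minimal sketch of such an init.sqf; the “KK_trigger” variable name, the trigger position, and the params are my own, for illustration:

```sqf
// init.sqf -- runs on server and every client

if (isServer) then {
    // Create the trigger only on the server, so no client disconnect
    // can ever leave a dead trigger behind.
    private _trg = createTrigger ["EmptyDetector", [0,0,0]];
    missionNamespace setVariable ["KK_trigger", _trg];
    publicVariable "KK_trigger"; // broadcast the reference to clients
};

// Every machine (server included) waits until the trigger reference
// arrives, in case the server-side init.sqf lags behind.
waitUntil {!isNil {missionNamespace getVariable "KK_trigger"}};
private _trg = missionNamespace getVariable "KK_trigger";

// Local trigger setup -- configures this machine's instance only.
// Other machines can set different params on the same trigger.
_trg setTriggerArea [50, 50, 0, false];
_trg setTriggerActivation ["WEST", "PRESENT", true];
_trg setTriggerStatements ["this", "hint 'BLUFOR present'", ""];
```

Because the setTriggerXXX calls after the waitUntil run on every machine independently, you could branch here (e.g. on isServer) to give the server instance entirely different params from the clients’.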
I also have to mention that all triggerXXX commands complementary to the setTriggerXXX commands will likewise return results by querying the local trigger setup. I’ve edited the BIKI and added notes to the respective pages. I also have a feeling that waypoints are subject to the same rules.
EDIT: Finally! The createTrigger command has now been extended (Arma 3 v1.43) and it is possible to create local triggers: https://community.bistudio.com/wiki/createTrigger