Here we describe scenery, an extensible framework for the scientific visualisation of mesh and volumetric data that supports VR and AR. Apart from mouse and keyboard, scenery supports a wide range of input modalities, such as VRPN controllers, OpenVR/SteamVR controllers, gamepads, and eye trackers.
scenery runs on top of the Java VM and can be used on Windows, macOS, and Linux. In case you dislike Java, fret not: scenery is written in Kotlin, a language that does away with much of the boilerplate needed in Java and provides a much nicer developer experience.
scenery is a framework and therefore intended as the basis for other visualisation tools. One such tool is its sister project sciview, a plugin for the popular Fiji image processing application that makes all of scenery's features available via a user-friendly interface.
If you are a developer and want to develop your own scenery-based application, you've come to the right place! If you are already familiar with sciview and want to extend its capabilities, you are right here as well! And in case you are an end-user looking for a visualisation tool you can use without any knowledge of programming, head over to the sciview documentation, or the sciview repository, for information on how to get started with sciview in Fiji.
For developing scenery, it's useful to know the basics of Kotlin. A great starting point for learning Kotlin is the Kotlin Koans, a set of small tutorials for getting familiar with the language.
To get started with developing with scenery, head over to the Getting Started page, or if you're all set up already, start with Rendering Meshes.
You want to check off the box on bounding boxes, so let's cut the bad puns and dive right into it. To understand bounding boxes, consider this stock photo of the Sherlock Holmes among penguins, and its bounding box:
In short, a bounding box is the smallest box that contains every feature of an object. This makes bounding boxes a powerful tool for intersection tests.
There are many ways to define such a bounding box mathematically. In scenery, we do it via a min vector and a max vector:
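A minimal sketch of this representation (the actual class in scenery, OrientedBoundingBox, contains additional logic):

```kotlin
import org.joml.Vector3f

// Sketch: a bounding box is fully described by its two extreme corners,
// given in the node's local coordinate system.
class OrientedBoundingBox(val min: Vector3f, val max: Vector3f)
```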
Here is a visual explanation of what this looks like: the min vector is the corner of the box with the smallest coordinates, and the max vector is the corner with the largest.
Note that both of these vectors are given in local coordinates, so that the bounding box remains valid when a node is translated or rescaled.
Accessing the bounding box of a node is rather easy:
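A short sketch; boundingBox is nullable, since a node might not have any geometry attached yet:

```kotlin
// May be null if the node has no geometry (yet).
val bb = node.boundingBox
bb?.let { println("min: ${it.min}, max: ${it.max}") }
```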
In case you are dealing with a more sphere-like object, e.g. a biological cell, you should consider using a bounding sphere instead. Simply use the function:
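A sketch of obtaining a bounding sphere from the bounding box; getBoundingSphere() returns the sphere's origin and radius:

```kotlin
// A sphere enclosing the node, e.g. for cheap intersection tests.
val sphere = node.boundingBox?.getBoundingSphere()
```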
User inputs are handled by the InputHandler, which is part of SceneryBase and Hubable. Inputs can be added in an override of the inputSetup method of SceneryBase. First, a Behaviour needs to be added; then a key can be assigned to it.
A behaviour defines an action which is executed when the corresponding keys are pressed. There are currently three base behaviours from which one can inherit: ClickBehaviour, DragBehaviour, and ScrollBehaviour.
In most cases, a ClickBehaviour is used. Example:
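A minimal sketch of a click behaviour; the click(x, y) signature comes from the underlying ui-behaviour library:

```kotlin
import org.scijava.ui.behaviour.ClickBehaviour

// Reports the screen position of the click.
val myClick = ClickBehaviour { x, y ->
    println("Clicked at $x, $y")
}
```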
In cases where the x and y screen positions of the cursor are not needed, they are simply ignored. Adding a behaviour to the InputHandler is straightforward:
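A sketch, with "myClick" as a hypothetical behaviour name:

```kotlin
// Register the behaviour under a name that key bindings can refer to.
inputHandler?.addBehaviour("myClick", myClick)
```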
The inputHandler should be available from an override of the inputSetup method.
Assigning keys to a behaviour works in a similar fashion:
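A sketch using the same hypothetical name; the trigger strings follow the InputTrigger syntax referenced below:

```kotlin
// Bind the behaviour to the first mouse button ...
inputHandler?.addKeyBinding("myClick", "button1")
// ... or, alternatively, to the M key.
inputHandler?.addKeyBinding("myClick", "M")
```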
The first binding assigns the previously added behaviour to the first mouse button; the second one does the same for the M key.
For more information on the available keys and combinations thereof, see the InputTrigger syntax.
scenery has been tested with a number of different systems and GPUs. If you have a setup that is not listed in the following table, or one that is marked as untested, please submit a PR to this documentation with the setup added. Please note that the OpenGL-based renderer was recently deprecated, so the following table only shows compatibility with the Vulkan renderer.
✅ Works · ⛔ Does not work · ⬜ Untested · 🚫 Unsupported configuration (e.g. no driver support)
| GPU | Windows | Linux | macOS |
| --- | --- | --- | --- |
| AMD Radeon HD 7850 (Pitcairn XT) | ✅ | ⬜ | ⬜ |
| AMD Radeon R5 M230 (Caicos Pro) | ✅ | ⬜ | ⬜ |
| AMD Radeon R9 390 (Hawaii Pro) | ✅ | ⬜ | ⬜ |
| AMD Radeon R9 Nano (Fiji XT) | ✅ | ⬜ | ⬜ |
| AMD Radeon R9 M370X (Strato Pro) | ⬜ | ⬜ | ⬜ |
| AMD Radeon RX 5700 XT (Navi 10) | ✅ | ⬜ | ⬜ |
| AMD FirePro W9100 (Hawaii XT) | ✅ | ⬜ | ⬜ |
| Intel HD Graphics 4400 (Haswell) | 🚫 | ✅ | ⬜ |
| Intel HD Graphics 5500 (Broadwell) | 🚫 | ⬜ | ⬜ |
| Intel HD Graphics 530 (Skylake) | ⬜ | ✅ | ⬜ |
| Intel Iris Plus Graphics (Ice Lake) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce GTX 1650 Max-Q (Turing) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce RTX 2080 Ti (Turing) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce RTX 2070 (Turing) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce Titan X (Maxwell) | ✅ | ✅ | ⬜ |
| Nvidia Titan Xp (Pascal) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce 1080 Ti (Pascal) | ✅ | ✅ | ⬜ |
| Nvidia GeForce 1070 (Pascal) | ✅ | ✅ | ✅ |
| Nvidia GeForce 1050 Ti (Pascal) | ✅ | ✅ | ⬜ |
| Nvidia GeForce 960 (Maxwell) | ✅ | ⬜ | ⬜ |
| Nvidia Quadro K6000 (Kepler) | ✅ | ⬜ | ⬜ |
| Nvidia Quadro P5000 (Pascal) | ⬜ | ⬜ | ⬜ |
| Nvidia GeForce 980M (Maxwell) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce 960M (Maxwell) | ✅ | ✅ | ⬜ |
| Nvidia GeForce 750M (Kepler) | ✅ | ⬜ | ⬜ |
| Nvidia GeForce 650M (Kepler) | ⬜ | ⬜ | ⬜ |
| Apple Silicon M1 | 🚫 | 🚫 | ✅ |
| Apple Silicon M2 | 🚫 | 🚫 | ✅ |
For a general explanation of volume rendering, see Volume Ray Casting.
In scenery, volumes are represented by Volume nodes. Unlike regular meshes, volumes are not rendered individually, but all together at the same time. This is coordinated by the VolumeManager, which is both Hubable and a Node.
The sampling of each volume is done in the same shader. The shader is auto-generated and specific to the number and type of volumes used. Therefore, adding a volume may trigger a shader rebuild, which is handled automatically by the framework. For more information, see Volume Shaders and Uniforms.
NOTE: The scene graph subtree below a Volume node is scaled by the dimensions of the volume. Example: to move a sub-node to the position of the voxel at (20, 120, 49), set its local position to (20, 120, 49).
Chapter 6 of Ulrik Günther's PhD thesis
Advanced Topic Volume Shaders and Uniforms
This page details how nodes are placed in the world coordinate system.
Nodes with an attribute of type Spatial can be positioned in the world coordinate system. By default, the spatial properties position, rotation, and scale are used:
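A short sketch of setting these properties via the node's spatial attribute (assuming the node has one):

```kotlin
import org.joml.Quaternionf
import org.joml.Vector3f

node.spatial {
    position = Vector3f(1.0f, 2.0f, 3.0f)     // translation in world units
    rotation = Quaternionf().rotateY(0.5f)    // rotation as a quaternion
    scale = Vector3f(2.0f)                    // uniform scaling by factor 2
}
```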
These properties are used by (Default)Spatial::composeModel (or any class overriding this function) to construct the Node's model matrix. Model matrices are 4x4 matrices that determine the Node's transformation within the world coordinate system, and they use homogeneous coordinates (if you want to know more about homogeneous coordinates and their general application in computer graphics, check out the Real-Time Rendering book and website by Tomas Akenine-Möller and colleagues, especially chapter 4 on Transforms).
The Node's world matrix is updated by (Default)Spatial::updateWorld(). If a Node does not have a parent, the world matrix is simply its model matrix. If it does have a parent, the world matrix is the model matrix multiplied with the parent's world matrix, such that hierarchical transforms become possible.
An important aspect to know about model and world matrices is the update cycle. Updates to the world and model matrix are handled asynchronously by scenery. This means any changes e.g. to the position, rotation, and scale properties will not be reflected immediately in the model and world matrix. Both matrices are updated in the background by scenery such that they are available to the renderer with the next frame rendered after the change of properties occurred:
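A sketch illustrating the asynchronous update:

```kotlin
node.spatial().position = Vector3f(1.0f, 0.0f, 0.0f)
// At this point, node.spatial().model and node.spatial().world do NOT
// reflect the new position yet; they are recalculated in the background
// before the next frame is rendered.
```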
Any manual changes to the world and model matrices will be overwritten by composeModel().
Again, do not rely on the model and world matrices to be up-to-date after changing transform properties.
As stated above, scenery by default takes the position, rotation, and scale properties into account when constructing a Node's model matrix. These properties were chosen as they are the most common ones used. However, you might feel the need to introduce additional transforms into the world matrix, such as skew. This is possible in two ways:
Setting wantsComposeModel = false: this will cause scenery to not run the composeModel() routine, but instead use whatever matrix you have provided as the Node's model matrix property. The world matrix will still be the parent's world matrix times your custom model matrix.
Overriding composeModel(): this is possible when introducing a new Attribute type, and will integrate your custom composeModel() routine within scenery's default update cycle.
As said, the model and world matrices are only updated before the next frame is rendered. This behaviour can be overridden as well, but doing so is discouraged, as it steps outside of scenery's expected update cycle (inconsistencies are not to be expected, though, as the updateWorld() method is @Synchronized). Only use this if strictly necessary, or for debugging purposes.
The above example then changes to:
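A sketch of forcing the update; the updateWorld(recursive, force) signature is assumed here:

```kotlin
node.spatial().position = Vector3f(1.0f, 0.0f, 0.0f)
// Recalculate the model and world matrices right away, instead of
// waiting for the next frame (signature assumed).
node.spatial().updateWorld(recursive = false, force = true)
```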
This is what would be expected if updates to the matrices happened immediately.
Slicing and cropping allow you to view otherwise hidden, inner parts of a volume without tinkering with the transfer function or the data.
Slicing and cropping of a volume rendering are done by a SlicingPlane, which is assigned to the Volume. Up to 16 SlicingPlanes can be assigned to a volume via the addTargetVolume method, and one SlicingPlane can be assigned to an unlimited number of volumes. How those planes interact with the volume is governed by the volume's slicing mode. The mutually exclusive options are:
None: no interaction between planes and volume; the volume is rendered in full.
Cropping: the rendering is cut in half by an intersecting plane. One half is rendered, the other is not. Additional intersecting planes may reduce the rendered parts further.
Slicing: only a thin slice around each plane is rendered, with full opacity. Multiple intersecting planes each reveal the area around themselves.
Both: cropping and slicing are both active. The area around the slicing plane is rendered with full opacity, and below it the volume is rendered regularly.
The SlicingPlane node itself has no geometry. To make it perceivable, or to allow user interaction, it needs to be attached to other nodes. The cropping/slicing calculation happens in world space; therefore, volume and slicing plane transforms can and should be manipulated via the scene graph.
Scenery examples/volume/OrthoViewExample
Scenery examples/volume/CroppingExample
The network synchronization of scenery has two targets: firstly, CAVE setups, where the scene needs to be rendered by multiple nodes from different perspectives, and secondly, multi-user volume viewing sessions on networked computers.
The networking capabilities of scenery are enabled via VM parameters. The server has to be started with -Dscenery.Server=true, and the client needs to be pointed to the server via -Dscenery.ServerAddress=tcp://127.0.0.1.
The server can be pretty much any scenery scene (possibly with some adjustments, see below). For a simple client, use [SlimClient].
Additional VM Parameters
For server and client
scenery.MainPort - port for the main channel. Default: 6040
scenery.BackchannelPort - port for the backchannel. Default: 6041
[SimpleNetworkExample]
[NetworkVolumeExample]
[SlimClient]
Once the server application is started, it scans the scene graph and registers all objects which implement the [Networkable] interface. By default, these are all nodes connected to the root node, plus their attributes.
When a client requests a resync, all registered objects are serialized and sent to all connected clients. Upon receiving an object, the client checks whether it has seen one with that ID before, and if not, adds it to its scene. If the object's parent has not been synced yet, the object is put in a waiting queue and added once the parent has been synced. If the object is already present, the existing object is simply updated with the values of the new one.
If the server registers a change in a registered object, it sends an update with the new object to all clients.
Relevant Classes
[Networkable]
[NodePublisher]
[NodeSubscriber]
The [SlimClient] is a scenery application that offers a mostly bare scene into which scenes from a server can be loaded. In addition, it offers its own camera if desired: if the VM parameter -Dscenery.RemoteCamera=true is set, cameras from the server scene are disregarded and a local one is used. If it is not set, or set to false, the server scene has to provide a camera.
Camera
[Camera.wantsSync] - allows disabling the registration of the camera for syncing. Default: true
Material
[DefaultMaterial.syncronizeTextures] - if set to true, scenery tries to transmit the textures over the network. Should be disabled for large textures. Default: true
Volume
[Volume.forNetwork] - volume creation method for networkable volumes. For more, see the section below.
There are two ways to create objects on the client. The default way is to take the deserialized object from the server and simply add it to its corresponding parent; this works for most cases. But some objects require their constructor to be run locally, or with specific parameters. For those cases, the Networkable interface provides the getConstructorParameters() and constructWithParameters(parameters) functions. No matter how the object was created, the update method is afterwards called with the same object from the server. The update happens after the object has been added to the scene graph.
Example
[PointLight]
Every Networkable object needs to implement the update method. In the update method, a fresh copy from the server is given as a parameter, and should be used to copy relevant values over to the client-side object.
Properties which are marked @Transient are not serialized and are therefore not available on the client-side copy of an object from the server. If such properties need to be synced anyway, they should be returned in a serializable form in an override of the getAdditionalUpdateData() method. This method is called on the server at serialization time, and the result is transmitted alongside the object; it then arrives as another parameter of the update function. Special attention has to be paid to parent classes which might also have additional data; these data have to be handled manually in the overriding methods.
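A sketch of such an update method; parameter names and order follow the description above and may differ from the actual interface:

```kotlin
// Hypothetical: a @Transient label is synced via additional update data.
override fun update(fresh: Networkable, getNetworkable: (Int) -> Networkable,
                    additionalData: Any?) {
    fresh as MyNode
    position = fresh.position         // plain serialized property
    label = additionalData as String  // restored from additional data
}
```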
References to other Networkable objects, besides the parent/child relations of the scene graph and node/attribute relations, can't be synced automatically. For those, the Networkable.networkID needs to be saved as additional data on the server side, and resolved in the update method via the getNetworkable lambda parameter.
Attention: the first update. If the object was not created with a local constructor but via deserialization, the first update will be "with itself". This can be a source of sneaky bugs: for example, a list that was cleared on the "client" object will then also be empty on the "server" object.
Examples
[DefaultMaterial] - additional update data
[DefaultSpatial] - getNetworkable, first update
For the server to register a change, the modifiedAt property needs to be updated. For convenience, the updateModifiedAt method may simply be called, as in [DefaultSpatial].
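A sketch of the pattern used in [DefaultSpatial]: bump the modification timestamp from property setters, so the server notices the change:

```kotlin
// Hypothetical property: every assignment marks the object as modified,
// causing the server to send an update to the clients.
var intensity: Float = 1.0f
    set(value) {
        field = value
        updateModifiedAt()
    }
```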
It might be necessary to prevent the server from registering an object. If wantsSync returns false, the object is skipped during registration, as in [Camera].
For the server to register objects outside of the scene graph, they need to be returned by getSubcomponents. The registration of attributes is handled this way for [DefaultNode]. One could extend this mechanism to rarely-changing data objects, to reduce how often they are transmitted.
At the time of writing, serializing lambdas is wonky; don't expect it to work. An alternative is to generate them anew on the client side, in the constructWithParameters or update method.
Syncing volume data is currently not possible. Therefore the data has to be available locally.
To initialize a volume node with sync support, the Volume.forNetwork creation method should be used. It takes an implementation of the [VolumeInitalizer] interface.
There are already two implementations available. The first is part of scenery itself: [VolumeFileSource].
VolumeFileSource has two parameters, each with a number of options:

path
- Given(val filePath: String) - for fixed file paths that are the same on every machine (e.g. a network drive, or something like "C://Volume")
- Settings(val settingsName: String = "VolumeFile") - the file path is taken from the VM parameter "-DVolumeFile=$path$" of each individual application
- Resource(val path: String) - the volume is a resource reachable by the Java class loader

type
- TIFF - TIFF file format
- SPIM - SPIM XML data format
The other implementation is [IJVolumeInitializer], which can be found in the [NetworkVolumeExample]. It takes a path or URL and opens it using the ImageJ framework.
Examples
[NetworkVolumeExample]
For developing with scenery, or scenery itself, it's quite useful to have an IDE that supports you in your coding tasks. We recommend IntelliJ IDEA, which is available as a free Community Edition from JetBrains. In case you are an Eclipse user, there is a Kotlin plugin available in the Eclipse Marketplace that can be used for development with scenery.
scenery and IntelliJ require an installed Java Development Kit (JDK), with version 21 and upwards being supported. scenery is fully compatible with OpenJDK, which you can download at https://adoptium.net/.
The git repository for scenery can be found at https://github.com/scenerygraphics/scenery. You can clone the repository to your drive by running:
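```
git clone https://github.com/scenerygraphics/scenery.git
```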
Should you already have a GitHub account and an SSH key set up with that account, you can also use:
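```
git clone git@github.com:scenerygraphics/scenery.git
```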
Either command will clone the scenery repository to a folder named scenery on your hard drive.
The scenery repository consists of the following major directories:

- src/main/kotlin/graphics/scenery contains the main scenery source code.
- src/test/tests/graphics/scenery/tests/examples contains example code and small applications for getting started and for demonstrating features of scenery.
- src/test/tests/graphics/scenery/tests/unit contains unit tests that are automatically executed when scenery is built, to ensure everything is still working. scenery uses the JUnit testing framework for that.
- src/main/resources/graphics/scenery contains images, shader files, and other files that are not Kotlin or Java source code.

Furthermore, there is an artwork directory containing some scenery logos.
As its build system, scenery uses Gradle, which stores all project information, such as dependencies, in the file build.gradle.kts.
To build scenery on the command line, change to the scenery directory and run:
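```
./gradlew build
```

On Windows, use gradlew.bat instead of ./gradlew.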
This will automatically download the required Gradle version, followed by all dependencies. Then, it will build the scenery JAR files in the build/libs directory. The first build will take a while, because all dependencies are downloaded from the internet. When the build has succeeded, there should be multiple files in that directory, named scenery-[VERSION].jar, scenery-[VERSION]-tests.jar, and scenery-[VERSION]-sources.jar.
From the scenery repository directory, you can then run:
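Recent scenery versions generate a Gradle run task per example class; assuming that is the case for your checkout, the example can be started with:

```
./gradlew TexturedCubeExample
```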
... and the TexturedCubeExample should (semi-)magically show up.
First, import the project in IntelliJ:
Click File > Open and navigate to the scenery directory,
Open the file build.gradle.kts there. When IntelliJ asks whether to open it as a project or as a file, select Open as Project.
IntelliJ will now resolve and download all dependencies of scenery, which might take a while when you are doing this for the first time.
When IntelliJ is done importing, navigate to the examples directory with the directory browser on the left; the directory is src/test/tests/graphics/scenery/tests/examples. Alternatively, you can switch the directory tree to Packages mode and navigate to graphics.scenery.tests.examples.basic.
Find an example you want to run, e.g. TexturedCubeExample, open the file, and click the small green Play button that appears next to the main routine in that file.
The example should now compile and magically show up on screen.
The examples of scenery are a good starting point for exploring features or developing your own applications. Tinker around and modify them to your needs.
Meshes are, as you probably know, a collection of polygons. If you have never heard of the concept, do not worry, we will explain it on the fly. In case you still want to learn more, here is a very visual article for you: https://conceptartempire.com/polygon-mesh/ – also virtually any introductory book on computer graphics contains this topic.
Let's jump right into it with an example! Say you would like to render the Pyramid of Cheops. Fortunately, we don't need thousands of slaves (or aliens) to do that. More useful in our case is a class which inherits from the Mesh class, for example like this:
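A minimal sketch; passing a name to the Mesh constructor follows scenery's API:

```kotlin
import graphics.scenery.Mesh

class Pyramid : Mesh("Pyramid") {
    init {
        // geometry setup goes here, see below
    }
}
```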
Before we can fill the Pyramid with useful information, we need to take a look at the Mesh class itself. Mesh inherits from both HasGeometry and Node. Essentially, this means that a Mesh is both a geometry and a node in the scene graph. About the latter we do not need to worry until scene setup. HasGeometry, however, is essential, so let's have a closer look. Of all its attributes, vertices is the most important for now, because it stores the, you guessed it, vertices of our polygon mesh in a float buffer. Our Pyramid has five vertices: the four corners of the base, A to D, and the apex E.
Let's store their coordinates as vectors first:
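A sketch with assumed example coordinates: a unit square base in the y=0 plane, with the apex above its centre:

```kotlin
import org.joml.Vector3f

// The four base corners (example coordinates) ...
val a = Vector3f(0.0f, 0.0f, 0.0f)
val b = Vector3f(0.0f, 0.0f, 1.0f)
val c = Vector3f(1.0f, 0.0f, 1.0f)
val d = Vector3f(1.0f, 0.0f, 0.0f)
// ... and the apex above the centre of the base.
val e = Vector3f(0.5f, 1.0f, 0.5f)
```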
Now we need to make a polygon mesh out of these. scenery works with triangle meshes by default, so our pyramid needs to be stored as a set of triangles: four for the sides, and two that together make up the square base.
These triangles are stored vertex by vertex in counterclockwise direction, at least if you want to render your geometry as a regular triangle mesh. Let's have a look at our first triangle: A, B, C.
It does not matter which vertex you start with, as long as the order is counterclockwise when seen from outside the object. Consequently, [A; C; B], [B; A; C], and [C; B; A] are the options you have in this case. Let's do this for all our triangles:
Now we are almost there. In the next step, we will allocate and fill our vertices buffer, and then calculate a normal vector for each triangle. Fortunately, the latter is done by the function recalculateNormals(); note that the vertices must be stored in the right (counterclockwise) order for it to work:
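A sketch of the buffer setup inside the Pyramid's init block; BufferUtils is scenery's helper for allocating direct buffers (depending on the scenery version, this may need to live inside a geometry block):

```kotlin
import graphics.scenery.BufferUtils

// Allocate space for 6 triangles x 3 vertices x 3 coordinates each ...
vertices = BufferUtils.allocateFloat(triangles.size * 3)
// ... fill it vertex by vertex ...
triangles.forEach { v -> vertices.put(v.x).put(v.y).put(v.z) }
vertices.flip()
// ... and let scenery compute one normal per triangle.
recalculateNormals()
```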
Congratulations! You just wrote your first Mesh. If you wish to render the Pyramid in a different manner, consider the enum GeometryType, which gives you a lot of options, e.g. rendering only the vertices. Otherwise, there it is, the mighty Pyramid of Cheops.
Instancing is a very cool feature, added to make rendering large numbers of objects a lot quicker, by submitting the object geometry only once and rendering it many times in a single draw call. Imagine you're making an educational movie and you want to animate thousands of blood cells moving through vasculature. Rendering each of the thousands of blood cells separately, even though they basically all look the same, would take a great number of draw calls, and therefore resources. Instead, you create only one copy of each type of blood cell and render many instances of it in one go.
On this page, we describe an example of how you can do this in scenery. The complete example can be found at src/test/tests/graphics/scenery/tests/examples/advanced/BloodCellExample.kt.
First, we need to create a mesh:
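A sketch following the pattern of BloodCellExample; the model file path is illustrative:

```kotlin
// The master object: its geometry is shared by all instances.
val erythrocyte = Mesh()
erythrocyte.readFromOBJ("models/erythrocyte.obj")  // path illustrative
erythrocyte.instancedProperties["ModelMatrix"] = { erythrocyte.model }
```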
Important here is the, you guessed it, instancedProperties map. So let's have a look at what this actually is. instancedProperties is a property of the Node class (please do not get confused, the Mesh class inherits from the Node class):
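Sketched declaration, as described here:

```kotlin
// In Node: maps a property name, as seen by the shader, to a lambda
// returning the current value of that property.
var instancedProperties = LinkedHashMap<String, () -> Any>()
```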
As you can see, we store the model matrix of our erythrocyte in the instanced properties. The model matrix is the matrix governing how this object is positioned in 3D space.
Now we can proceed. We will create 40 erythrocytes and make them children of a cell container:
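A sketch of that loop; the random positions are purely illustrative:

```kotlin
import kotlin.random.Random

val container = Node("cell container")

(0 until 40).forEach { _ ->
    val cell = Mesh()
    // Instances report their world matrix, which places each copy
    // in world space.
    cell.instancedProperties["ModelMatrix"] = { cell.world }
    cell.position = Vector3f(
        Random.nextFloat() * 10.0f,
        Random.nextFloat() * 10.0f,
        Random.nextFloat() * 10.0f
    )
    cell.parent = container
    // Register the copy with the master object.
    erythrocyte.instances.add(cell)
}
```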
Two things are of main interest here: first, we have instancedProperties again, but this time with the world matrix, which brings each individual instance into world space. Second, there is instances, a CopyOnWriteArrayList of Nodes on the master object: the mesh data of the master (the erythrocyte) is shared with each of the erythrocytes 0..40 registered there. Now all that's left to do is adding our erythrocytes to the scene:
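Finally, a one-line sketch (depending on the setup, the master object may also need to be added):

```kotlin
scene.addChild(container)
```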
Now you should be able to render the objects in much less time.
Curious what piece of code is slowing you down?
scenery includes support for the Remotery profiler. Remotery is a simple profiler that can be used from a browser, either on the same machine or remotely. Here's the series of steps required to profile a scenery-based application:
Clone the Remotery repository and open vis/index.html in a browser. This is the client that connects to the application and visualises profiling results.
In scenery, set up profiling by either handing the (SceneryBase-derived) application the scenery.Profiler=true system property on startup, or by adding a new Remotery instance to the Hub:
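A sketch; the constructor arguments of the Remotery class are assumed:

```kotlin
// Register the profiler with the application's hub.
hub.add(Remotery(hub))
```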
A certain piece of code can then be wrapped in begin() and end() blocks of Remotery. The profiler object itself can be queried from the Hub, if available in that routine:
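A sketch of wrapping a section; the hub query is written with a reified type parameter here, which may differ from the actual API:

```kotlin
// The profiler may be absent if profiling is not enabled.
val profiler = hub.get<Profiler>()
profiler?.begin("Expensive operation")  // section name is illustrative
// ... the code to be measured ...
profiler?.end()
```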
When connecting to the Remotery instance in the web browser, you'll see the profiling results, updated in real-time.
This page describes input bindings for gamepads, how to add and modify them, and lists mappings for different gamepad controllers.
Gamepad axes and buttons are handled differently in scenery. Axes are used for analog input and can e.g. drive movement or rotation of the camera, or of an object. Buttons, in turn, can trigger simple behaviours.
A GamepadClickBehaviour can be used to e.g. toggle functionality:
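A sketch modelled on the description of ProteinComparisonExample; the toggling logic and the exact interface shape are assumed:

```kotlin
// Toggles which of two proteins in the scene is currently active.
val toggleProteins = object : GamepadClickBehaviour {
    override fun click(x: Int, y: Int) {
        activeProtein = if (activeProtein == protein1) protein2 else protein1
        // ... e.g. adjust materials to highlight activeProtein ...
    }
}
```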
This snippet has been modelled on ProteinComparisonExample. When this behaviour is triggered, another object in the scene is highlighted. In order to bind this behaviour to a button on the gamepad, run the following:
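A sketch; whether GamepadButton identifiers can be handed to addKeyBinding directly may depend on the scenery version:

```kotlin
inputHandler?.addBehaviour("toggleProteins", toggleProteins)
inputHandler?.addKeyBinding("toggleProteins", GamepadButton.PovRight)
```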
This adds the toggleProteins behaviour defined above to the inputHandler, gives it the name "toggleProteins", and binds it to the right directional pad button. In order to remove the behaviour from the input handler again, use:
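Both the behaviour and its key binding are removed by name:

```kotlin
inputHandler?.removeBehaviour("toggleProteins")
inputHandler?.removeKeyBinding("toggleProteins")
```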
These behaviours can be used to either move or rotate nodes. With scenery's default key bindings, the left-hand stick is bound to movement in the plane, while the right-hand stick is used to look around. These bindings can of course be modified:
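A sketch, again modelled on ProteinComparisonExample, with an assumed GamepadRotationControl constructor:

```kotlin
// Remove the default camera control ...
inputHandler?.removeBehaviour("gamepad_camera_control")

// ... and rotate the active protein with the right-hand stick instead
// (constructor arguments assumed).
val rotateProtein = GamepadRotationControl(
    listOf(Component.Identifier.Axis.RX, Component.Identifier.Axis.RY)
) { activeProtein }

inputHandler?.addBehaviour("rotate_protein", rotateProtein)
inputHandler?.addKeyBinding("rotate_protein", GamepadButton.AlwaysActive)
```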
This snippet, again from ProteinComparisonExample, removes the default gamepad_camera_control behaviour, and adds a new behaviour, bound to the RX and RY axes, that rotates the node bound to activeProtein. Movement and rotation controls are always active, and should therefore be bound to GamepadButton.AlwaysActive.
Vertical movement is not part of the default input bindings, but can also be easily added:
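A sketch with an assumed GamepadMovementControl constructor:

```kotlin
// Bind vertical movement to the Z axis of the controller
// (constructor arguments assumed).
val verticalMovement = GamepadMovementControl(
    listOf(Component.Identifier.Axis.Z)
) { scene.findObserver() }

inputHandler?.addBehaviour("vertical_movement", verticalMovement)
inputHandler?.addKeyBinding("vertical_movement", GamepadButton.AlwaysActive)
```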
This snippet binds vertical movement to the Z axis of the controller.
Buttons printed in italics are controller axes; they cannot be used for GamepadClickBehaviours, but only for GamepadMovementControl or GamepadRotationControl.
How to run scenery-based applications on a distributed setup, such as a CAVE or PowerWall.
Rendering with distributed setups is still experimental, so this document is very likely to change in the future.
For all machines of the setup:
For the control node:
Optionally:
a remote desktop solution, such as VNC might help with debugging the setup
In order to run distributed applications, all machines need access to two network shares, which do not necessarily need to reside on the same machine:
a share containing the scenery application directory, including all JARs built; a suggested structure for this is:
path/to/base          # contains the scripts from the cluster-scripts repository
path/to/base/scenery  # contains the scenery git repository and JARs
a share containing the data to be loaded.
The scripts from the cluster-scripts repository need to be adjusted for your local setup; in particular, the username and password for the rendering node accounts need to be changed, as well as their names and the name of the network share used. Go through the scripts carefully: they are very short, and contain comments in the places that need to be changed.
Next, the pom.xml file from the scenery repository needs to be imported into IntelliJ on the control node. Open IntelliJ, select the file via File > Open, and follow the instructions.
In order for scenery to know about your screen configuration, a screen configuration YAML file is required. Such a file looks like this:
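A sketch of such a file; the field names are assumed from scenery's ScreenConfig format, and should be checked against an example configuration shipped with scenery:

```yaml
name: Example CAVE configuration
screenWidth: 1920
screenHeight: 1200
screens:
  front:
    match:
      type: Property    # matched against -Dscenery.ScreenName
      value: front
    lowerLeft: -1.92, -1.20, 1.92
    lowerRight: 1.92, -1.20, 1.92
    upperLeft: -1.92, 1.20, 1.92
```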
We assume that all projectors have the same resolution. When launching on each of the nodes, the appropriate screen is determined using the match block. Here, Property means that the appropriate screen is determined via the JVM system property scenery.ScreenName, which is set by the run-cluster.bat and run-test.bat scripts. An arbitrary number of screens is possible, but the YAML file and the run-cluster.bat script need to be adjusted accordingly.
In IntelliJ, find DemoReelExample; it can be found in the src/test/tests/graphics/scenery/examples/cluster directory. In the example, make sure that the IP given for TrackedStereoGlasses matches that of your tracking system, and that the YAML file given matches the name of your screen configuration. Then click the Run button to run the example locally and verify that all data is found.
Afterwards, go to Run > Edit Configurations... and adjust the VM options of DemoReelExample so that the example can run on all nodes.
In the Before launch part of the window, two additional steps need to be added:
The Maven goal package needs to be run, in order to build all JAR files and make them available to the other nodes.
The run-cluster.bat script needs to be run to launch scenery instances on the projection nodes, set up as an External Tool.
After this is complete, DemoReelExample can be run again, and should now launch on all nodes.
In DemoReelExample, you can use the WASD keys to move around. You can also keep an Xbox or PS4 gamepad connected to the control node and use it for movement. Further keybindings are:
In order to quit the demo on all nodes, use the killall-java.bat script.
scenery uses BigVolumeViewer (BVV) for rendering volumes. Currently, there is no documentation on BVV itself, so we try to explain both a bit in this chapter. However, this chapter only provides a brief overview, with a focus on shader generation and uniform access.
As mentioned in the volume rendering chapter, the shader of the VolumeManager is auto-generated according to the number and type of to-be-rendered volumes.
Once the VolumeManager decides it needs to rebuild its shader, it starts by collecting all needed code snippets. The shader code is stored in multiple GLSL resource files, annotated with preprocessing commands for the joining process. The (hardcoded) per-volume uniform names are also collected (see the next section). A code snippet together with its associated uniforms is called a Segment. The segments, along with information about the used volumes and other things, are passed to the MultiVolumeShaderMip constructor, which is part of BVV.
This constructor joins the segments into a complete version of the shader code. The joining executes the previously mentioned preprocessor commands, along with the repetitions required to render multiple volumes. The result is saved internally as a SegmentedShader. At this point, the shader is still not compiled.
Once BVV plans to render the VolumeManager node for the first time, it has to compile the shader. But before that happens, the VolumeShaderFactory transforms the code one last time: among other things, the uniforms, which are at this point strewn all over the code, are extracted and placed in a UBO for Vulkan compatibility. Finally, a shader package with the final code is handed to BVV to compile and use.
NOTE: A breakpoint placed in VolumeShaderFactory.construct(..), just before the return, is also the optimal place to extract shader code for manual debugging.
To set general uniforms in the volume shader, they simply need to be added to the shaderProperties of the volumeManager, e.g.:
and somewhere in a shader snippet, a corresponding declaration:
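Matching the property set above:

```glsl
uniform float mySlicingOffset;
```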
To set a uniform per volume, it needs to be declared as a per-volume uniform first. If we want a Vector3f slicingPlane uniform for each volume, to be used in the sampling part of the shader, we need to add it to the key lists of the corresponding segments in the VolumeManager. (At the time of writing (11.03.2021), these were lines 260 and 264, because there are two kinds of sampling segments.)
To set the values, we then use setCustomUniformForVolume(..) on the current shader. In our example, we could add our code to the loop over renderStacksStates like this:
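A sketch of that loop; the variable names are assumed from the description above:

```kotlin
renderStacksStates.forEachIndexed { i, state ->
    // ... existing per-volume setup ...
    // Set our per-volume uniform on the currently bound shader.
    currentProg.setCustomUniformForVolume(i, "slicingPlane", slicingPlaneVector)
}
```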
If the target uniform is an array or a matrix, setCustomFloatArrayUniformForVolume(..) has to be used instead.
To profile scenery, a third-party profiler of your choice can be used as well. Good candidates are IntelliJ's integrated profiler and the Java Flight Recorder. Since Oracle has changed the licensing terms of the JDK, we advise using OpenJDK and an open-source build of the Java Flight Recorder, which can be used from Java 11 onwards. A good tutorial on how to set this up can be found online.
a current version of Java 11
an up-to-date graphics driver (currently only tested with Nvidia Quadro cards)
the Vulkan SDK installed, for debugging
psexec installed, for remote execution
a current installation of JetBrains IntelliJ Community Edition, for running the examples
a clone of the scenery git repository
scenery's cluster scripts
Install the above requirements on the machines, and make sure that processes can be launched remotely with psexec; see the Sysinternals documentation for details on psexec setup.
Should you experience any issues, please feel free to contact us.
| On controller | Identifier in scenery |
| --- | --- |
| A | GamepadButton.Button0 |
| B | GamepadButton.Button1 |
| X | GamepadButton.Button2 |
| Y | GamepadButton.Button3 |
| LB | GamepadButton.Button4 |
| RB | GamepadButton.Button5 |
| View (⧉) | GamepadButton.Button6 |
| Menu (≡) | GamepadButton.Button7 |
| Directional pad | GamepadButton.PovUp, GamepadButton.PovDown, GamepadButton.PovLeft, GamepadButton.PovRight |
| *LT/RT (analog shoulder buttons)* | Component.Identifier.Z |
| *Left analog controller* | Component.Identifier.X, Component.Identifier.Y |
| *Right analog controller* | Component.Identifier.RX, Component.Identifier.RY |
| Button | Function |
| --- | --- |
| Shift+1 | Go to the Bile scene |
| Shift+2 | Go to the C. elegans scene |
| Shift+3 | Go to the Drosophila scene |
| I, K | Rotate scene up/down |
| J, L | Rotate scene left/right |