You will need to read the original paper on alternating decision trees to see how the scores are calculated (essentially, a boosting algorithm that produces confidence scores is used to generate an ensemble of rules that are combined into an alternating decision tree, i.e., a form of option tree).
Classifying an instance with this tree is straightforward: follow every path whose tests (stated along the edges connecting the splitter nodes and the prediction nodes) apply to the instance, and sum up the numeric values in all the prediction nodes you encounter along the way. The prediction nodes are the rectangular ones.
To find all applicable paths, start from the root node of the tree and work your way down, just as in a standard decision tree. The difference is that whenever you reach a prediction node, you must follow all the edges extending downwards from that node, not just one.
In your case, only the root prediction node has multiple branches (a.k.a. options) that you need to follow. (In this particular alternating tree, there are essentially four decision trees that are joined at the root node.)
Once you have summed the values of all applicable prediction nodes, simply check the sign of the sum. In your case, if the sum is positive, the class to predict is “tested_positive”; if it is negative, the class to predict is “tested_negative”. This mapping is indicated by the additional information given in the root node.
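The traversal described above can be sketched in a few lines of Python. Note that the tree below is a toy example: the attribute names, thresholds, and prediction values are invented for illustration and are not taken from your actual tree; only the sum-and-check-the-sign logic is what matters.

```python
# Sketch of ADTree classification: sum the prediction values along all
# applicable paths, then use the sign of the sum to pick the class.
# The tree structure here is hypothetical, chosen just to show the idea.

def classify(node, instance):
    """Return the sum of prediction values over all applicable paths.

    A prediction node is a dict with a numeric 'value' and a list of
    'splitters'; each splitter node holds a 'test' function plus two
    child prediction nodes, 'yes' and 'no'. Every splitter under a
    prediction node is followed (these are the "options").
    """
    total = node["value"]
    for splitter in node["splitters"]:
        branch = "yes" if splitter["test"](instance) else "no"
        total += classify(splitter[branch], instance)
    return total

# Toy alternating tree: the root prediction node carries two splitter
# children, mirroring the multiple branches at the root described above.
tree = {
    "value": -0.5,  # root prediction value
    "splitters": [
        {
            "test": lambda x: x["plas"] >= 140,   # invented threshold
            "yes": {"value": 0.7, "splitters": []},
            "no":  {"value": -0.3, "splitters": []},
        },
        {
            "test": lambda x: x["mass"] >= 30,    # invented threshold
            "yes": {"value": 0.4, "splitters": []},
            "no":  {"value": -0.6, "splitters": []},
        },
    ],
}

instance = {"plas": 150, "mass": 33}
score = classify(tree, instance)          # -0.5 + 0.7 + 0.4 = 0.6
label = "tested_positive" if score > 0 else "tested_negative"
print(score, label)
```

The magnitude of the sum can also be read as a rough measure of confidence in the prediction, which is one of the attractions of alternating decision trees.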
There is also a high-level description of alternating decision trees in our data mining book.