
Menus: User Utterances and Actions

The previous section could not deliver a general classification of human-computer relationships, but it showed that different people may understand their actions in human-computer interaction differently, depending on their attitude towards the computer.
What follows is a closer examination of the interaction, in particular of the "utterances" a user makes. Since most computers do not understand "natural" language, human-computer communication takes place via the different input devices described in section 3.1.1.
Mouse input can be divided into single clicks and double clicks.

  • Single clicks, which are used, for example,
    • to select an object,
    • to select an operation, for example an action listed in a menu, be it in the main menu bar on the Macintosh or in a context-sensitive menu opened with the right mouse button in Windows,
    • to select a control.
  • Double clicks, which are used, for example,
    • to open a file,
    • to open an application,
    • and to open a folder.46


 
Figure 5: A MacOS Hierarchical Menu


In most cases, an object is selected with an initial mouse click, and then a menu item is selected; Apple calls this the noun-verb paradigm (286).47 As an example, a user selects a word in a text processing application such as Microsoft Word and then selects the menu item "Cut". The first click could be regarded as an utterance such as "Look at this object", while the second could be translated to "Cut this object". Both sentences could also be understood as one single sentence, "Cut the object I have selected".48 In this case, the text processing application is the conversation partner. The first selection, rather than the second, could also be regarded as a gesture, such as pointing at an object with a finger (Schmauks 124).
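
The noun-verb paradigm can be illustrated with a minimal sketch (written here in Python; the class and method names are chosen for illustration and are not taken from any actual application): the first utterance marks an object, the second applies an operation to whatever is currently marked.

    # A minimal sketch of the noun-verb paradigm: the user first selects an
    # object (the "noun"), then chooses an operation (the "verb") that is
    # applied to the current selection. Names are illustrative only.

    class Document:
        def __init__(self, text):
            self.text = text
            self.selection = None           # (start, end) of the selected word
            self.clipboard = ""

        def select(self, start, end):
            """First utterance: 'Look at this object.'"""
            self.selection = (start, end)

        def cut(self):
            """Second utterance: 'Cut the object I have selected.'"""
            if self.selection is None:
                return                      # no noun yet, so the verb has no referent
            start, end = self.selection
            self.clipboard = self.text[start:end]
            self.text = self.text[:start] + self.text[end:]
            self.selection = None

    doc = Document("Cut this word")
    doc.select(4, 9)                        # the click selects the word "this "
    doc.cut()                               # the menu item "Cut" acts on the selection
    print(doc.text, "|", doc.clipboard)     # -> Cut word | this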

There are four different types of menus:

  1. pull-down menus ("drop-down menus" in Microsoft's terminology; see Figure 3),49
  2. hierarchical menus (see Figure 5),
  3. pop-up menus50 (see Figure 7),
  4. contextual menus (see Figure 8).

Another example is saving a file: here, the object is the file represented by a window. Since several windows may be open in one application (for example, several letters in a text processing application), the appropriate window has to be selected if it is not the active window. If the file to be saved is the only one open in the application, no selection has to be made, since the computer "knows" that a command can only refer to this object. This utterance could be translated to "Save the active file".
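
How a command finds its object without an explicit selection can be sketched as follows; the function and variable names are hypothetical and serve only to illustrate the resolution described above.

    # A sketch of how a "Save" command finds its object: if only one window is
    # open, the command implicitly refers to it; otherwise it refers to the
    # window the user has brought to the front. Names are hypothetical.

    def resolve_target(open_windows, active_window):
        if len(open_windows) == 1:
            return open_windows[0]       # the computer "knows" the only possible referent
        return active_window             # otherwise the active window was selected first

    letters = ["letter_to_smith.doc"]
    print(resolve_target(letters, active_window=letters[0]))   # -> letter_to_smith.doc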


 
Figure 6: A MacOS Dialog Box: "Save as..."


Furthermore, several operations can be applied to an object without selecting the object again between the operations. Figure 6 shows a "Save as..." dialog box which is displayed after an appropriate operation has been selected (or a keyboard accelerator has been pressed); the user is subsequently offered several options of what to do with the file.
A double click could be regarded as a special form of a single click, since the first click is a selection, and only the short time frame within which the second click occurs turns it into a different action. In most cases, a double click is performed in order to open an object.
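
That a double click is nothing more than two single clicks within a short time frame can be expressed in a few lines; the threshold used below is an assumption, since the actual interval is a system setting the user may adjust.

    # A sketch of double-click detection: the second click only counts as part
    # of a double click if it arrives within a short time frame after the first.
    DOUBLE_CLICK_INTERVAL = 0.5     # seconds; assumed value, normally a system setting

    def classify_click(previous_click_time, current_click_time):
        """Return 'double' if the two clicks form a double click, else 'single'."""
        if (previous_click_time is not None
                and current_click_time - previous_click_time <= DOUBLE_CLICK_INTERVAL):
            return "double"          # e.g. open the object
        return "single"              # e.g. select the object

    print(classify_click(None, 10.0))    # -> single (the first click selects)
    print(classify_click(10.0, 10.3))    # -> double (the second click opens)
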
Another special form is the drag and drop of an object: the user presses the mouse button in order to select an object but does not release it; instead, the object is dragged to another location, and the mouse button is released once the desired location has been reached. File icons may also be dropped onto the icon of an application in order to open the file with that application ("Take this object and open it with this application"). However, again the object has to be selected first in order to perform the desired action, and, again, this action could be understood as a gesture (Schmauks 124).
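
Drag and drop can likewise be read as a sequence of three low-level events: pressing the button selects the object, moving the mouse carries it along, and releasing the button completes the utterance. The following sketch models dropping a file icon onto an application icon; all names are illustrative.

    # A sketch of drag and drop as three events: press (select), move (drag),
    # release (drop). Dropping a file onto an application icon is the utterance
    # "Take this object and open it with this application".

    class Desktop:
        def __init__(self):
            self.dragged = None

        def mouse_down(self, icon):
            self.dragged = icon                 # selection; the button is still held down

        def mouse_up(self, drop_target):
            if self.dragged is None:
                return None
            file_icon, self.dragged = self.dragged, None
            return f"open {file_icon} with {drop_target}"

    desktop = Desktop()
    desktop.mouse_down("letter.doc")
    print(desktop.mouse_up("Microsoft Word"))   # -> open letter.doc with Microsoft Word
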
To sum up, a distinction can be drawn between two types of utterances made by mouse input:

  • selecting an object,
  • selecting a command in order to define what has to be done with this object.

Keyboard input is somewhat more complex. Obviously, there is a difference between entering a file name in a dialog box and writing a letter in a text processing application. Typing a letter could be compared to dictating it to a secretary: opening Microsoft Word, for example, could be understood as calling one's secretary, while naming the file in a dialog box takes place on another level of interaction, comparable to the secretary asking to whom the letter should be sent. When typing a letter, users do not primarily communicate with the computer, just as the secretary does not understand the dictated sentences as referring to herself. This kind of keyboard input will therefore not be examined in the following; instead, the analysis concentrates on keyboard input which is used to interact with the computer itself.


 
Figure 7: A MacOS Popup Menu


There is a second form of keyboard input which differs from entering a file name in a dialog box: some forms of keyboard input may substitute for mouse input; pressing CTRL and X, for example, means "Cut" in Microsoft Windows. Thus, a distinction can be made between two types of utterances made by keyboard input (a sketch follows the list below):

  • using a keyboard accelerator instead of a menu selection,
  • entering information required for the exchange.
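
These two types can be sketched as a simple mapping from key combinations to the same commands a menu would offer; the bindings shown follow common Microsoft Windows conventions, while the dispatch itself is an illustration rather than an actual implementation.

    # A sketch of keyboard accelerators as a second route to the commands a
    # menu offers; an unknown key combination is treated as ordinary text
    # entry. The bindings and command names are illustrative.
    ACCELERATORS = {("CTRL", "X"): "Cut",
                    ("CTRL", "C"): "Copy",
                    ("CTRL", "V"): "Paste",
                    ("CTRL", "S"): "Save"}

    def utterance(keys):
        """Translate a key combination into the command a menu click would select."""
        return ACCELERATORS.get(keys)       # None: the keystroke is plain text entry

    print(utterance(("CTRL", "X")))         # -> Cut (substitutes for the menu selection)
    print(utterance(("A",)))                # -> None (information entered for the exchange)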

To sum up, three forms of user input and user utterances have to be distinguished:

  • selecting an object ("Look at this object"),
  • selecting an operation ("Do the following with the object selected"),
  • entering information required for the exchange (for example, "Name the object X").

It should be noted that operations are predefined, whereas entering information is relatively free. A user may name a file whatever comes to her mind (albeit restricted by the number of characters allowed), but it is not possible to apply an operation to an object that is not present. Similarly, only those objects can be selected that exist in the system's perception, that is, objects that the user can "touch".
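
The asymmetry just described, predefined operations versus freely entered information, can be summarized in a final sketch; the sets of objects and operations as well as the length restriction on names are assumptions chosen for illustration.

    # A sketch of the three utterance types. Selecting objects and operations
    # is restricted to what the system offers; entering information is free,
    # apart from limits such as the length of a file name. Values are illustrative.

    OBJECTS = {"letter.doc", "report.doc"}          # only objects the user can "touch"
    OPERATIONS = {"Open", "Cut", "Copy", "Save"}    # only predefined operations
    MAX_NAME_LENGTH = 31                            # assumed limit on entered names

    def select_object(name):
        return name in OBJECTS                      # "Look at this object"

    def select_operation(name):
        return name in OPERATIONS                   # "Do the following with the object selected"

    def enter_name(name):
        return len(name) <= MAX_NAME_LENGTH         # "Name the object X" - almost free

    print(select_object("letter.doc"), select_operation("Shred"), enter_name("whatever I like"))
    # -> True False True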



Thomas Alby
2000-05-30