April 13, 2021

Evidence mounts that vessel safety will benefit from technology enhancing the bridge lookout.

Maritime safety regulations have – and for valid reasons – traditionally been strict, and have often developed in ways that can hold up innovation. Almost two decades after 'goal-based' standards and 'technical equivalence' arrived at the IMO, rules based on risk assessment and functionality continue to face an uphill battle to redefine a safety culture built on 'dos and don'ts'.

The use of predictive algorithms to automate or semi-automate aspects of ship operations is already accepted as a way of enhancing safety as well as operational efficiency. Dynamic Positioning, for example, has become a mainstream vessel control technology whose advancing capability not only responds to but anticipates conditions, based on accumulated data. Predictive algorithms are now also being used to enhance safety in other areas of ship control, including maneuvering, trim optimization and braking. However, regulatory red flags are raised when the same logic is applied to the 3D visualization and situational awareness techniques that can 'see' better than human beings and, using accumulated data, assess a situation with greater consistency.
Lawmakers say that automation should only be favored over the human alternative if safety is “equal or better”. 

However, establishing the basis for the comparison is no easy matter. The conventions on Safety of Life at Sea (SOLAS), Standards of Training, Certification and Watchkeeping (STCW) and the Regulations for Preventing Collisions at Sea (COLREG) are descriptive in explaining the relationship between bridge crew and ship, while human performance varies by individual, but also with health, alertness, mood, time of day and conditions. One area where the "equal or better" formula may be tipping in favor of automation is the role of the lookout. As the purpose of the navigation rules is to prevent collisions, it follows that the purpose of the lookout is to ensure the safety of the ship and of any other vessels in the vicinity by relaying information to the officer of the watch in an orderly fashion and with the best possible accuracy.

Modern SOLAS ships carry mandatory navigational equipment for determining position and heading and for detecting relevant obstacles in the surroundings. In practice, these vessels typically have radar, a gyrocompass, ECDIS, a GNSS-based positioning system and an AIS receiver. In addition to these devices, the lookout uses his or her own senses, mainly eyes and ears, to perceive the surroundings.
Human eyesight performance depends on eye health and clarity of vision, but also on light and on obstacles (such as fog) in the line of sight. Eyesight, hearing and other faculties are quantifiable under the STCW Convention, but variations in human performance are inevitable – no matter how vigilant the individual. Even setting aside individual strengths and weaknesses, external factors limit the ability of any human lookout to detect targets from the bridge. Even when visibility is 'perfect', for example, the curvature of the Earth limits the maximum range of human vision.

If this observation appears to be of hair-splitting proportions, it should be noted that a lookout with 'perfect' vision positioned at a height of 30 meters would not be able to see another 30-meter-high structure at all if it were more than 39.1 km away. In reality, no lookout's vision is perfect: based on the minimum eyesight requirement for deck officers, the minimum angular resolution of a human lookout is 2 arcminutes. The same observer would therefore only see a structure of the same height out to a distance of 31.7 km. In real conditions, the lookout's visibility is further reduced by fog, haze, rain, smoke and so on, and the range of visibility depends not only on light conditions but on the properties of the target. For some, these limitations and areas of uncertainty are simply part and parcel of the real world. Camera performance, too, is affected by the real world – by air quality, humidity, vapor, light conditions, contrast, color and the reflectivity of the object. Lens quality and focus can vary, while mechanical vibration may also compromise performance.
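Both figures can be reproduced with a short calculation – a sketch only, assuming a spherical Earth of radius 6,371 km and neglecting atmospheric refraction:

```python
import math

R = 6_371_000.0  # mean Earth radius in meters (assumed)
ARCMIN = math.pi / (180 * 60)  # one arcminute in radians

def horizon_distance(h):
    """Geometric distance to the horizon for an observer at height h (meters)."""
    return math.sqrt(2 * R * h)

def visible_height(target_h, observer_h, d):
    """Height of a target still showing above the horizon at distance d (meters)."""
    hidden = max(0.0, d - horizon_distance(observer_h)) ** 2 / (2 * R)
    return max(0.0, target_h - hidden)

# A 30 m observer and a 30 m structure: each side sees ~19.55 km to the
# horizon, so the structure disappears completely at about 39.1 km.
mutual = 2 * horizon_distance(30)
print(f"geometric limit: {mutual / 1000:.1f} km")  # -> 39.1 km

# With a 2-arcminute minimum angular resolution, step inwards to find the
# farthest distance at which the visible part still subtends 2 arcminutes.
d = mutual
while visible_height(30, 30, d) / d < 2 * ARCMIN:
    d -= 10  # 10 m steps
print(f"2-arcminute limit: {d / 1000:.1f} km")  # -> ~31.7 km
```

The small-angle criterion (visible height divided by range) is sufficient here because 2 arcminutes is a tiny angle; the result matches the article's 31.7 km figure.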

Digital and connected real world

However, in the real world where ships operate today, the SOLAS navigational aid equipment with which humans interact is already digital. Looked at from a functional rather than human perspective, the lookout performs the 'sensor fusion' of combining visual, radar and chart inputs, then offers an overall 'manual' assessment of the situation.

To achieve the same level of performance with a machine-based lookout, it is first vital to prove that computer vision can achieve an adequate level of performance at the boundary conditions. The main tasks in demonstrating equivalency by means of visual technology are therefore:

[1] Detection of a target whose minimum projected dimension extends 2 arcminutes above the horizon, in good visibility conditions;

[2] Detection of a target whose minimum projected dimension extends more than 2 arcminutes in the field of view at the visibility range, in decreased visibility conditions;

[3] Detection of a target in front of the horizon that extends 2 arcminutes in its minimum projected dimension, in good visibility conditions.
If the above can be demonstrated, the minimum level of a lookout – that is, detecting the targets – could be shown to be ‘as good or better’ than the human.

Trials and learnings

To verify these theoretical assertions, a camera-based awareness system was tested in the Helsinki archipelago, using a setup built around a full-HD PTZ camera with 30x optical zoom. The horizontal field of view of the camera at maximum zoom is 2.3°, and the camera was installed at a height of 10 meters. The vessel on which the camera was mounted remained stationary throughout the experiment.
Two pleasure craft were used as detection targets, each navigated to a specific distance from the vessel where the camera was mounted. The weather was clear, with a 4 m/s wind from the northeast; air pressure was 1019 hPa and visibility was good. The experiment took place between 04:00 and 06:00. Using the conventional approach, the 'Small pleasure craft' would be expected to be detected at around 5.8 km. Using the new setup, however, the boat was still detectable at 6.8 km.
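The trial figures imply a large angular-resolution margin for the camera. The sketch below assumes a 1920-pixel-wide full-HD sensor and uniform angular sampling across the 2.3° field of view; in practice, reliable detection also needs several pixels on target and sufficient contrast:

```python
# Per-pixel angular resolution of the trial camera at maximum zoom,
# assuming a 1920-pixel-wide full-HD sensor (an assumption, not stated
# in the trial description) and uniform sampling across the field of view.
FOV_DEG = 2.3
PIXELS = 1920

arcmin_per_pixel = FOV_DEG * 60 / PIXELS
print(f"{arcmin_per_pixel:.3f} arcmin/pixel")  # -> 0.072 arcmin/pixel

# How much finer the camera samples than the 2-arcminute human minimum.
print(f"{2.0 / arcmin_per_pixel:.0f}x finer than the human limit")  # -> 28x
```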

In sum, Table 1 shows the camera-based detection distances for various marine-relevant targets and compares them to the estimated detection distances of a human lookout. It shows how the camera setup can achieve equal or better performance than the human eye in good visibility conditions.

Table 1

Modern perception technologies also achieve performance beyond human perception capabilities for other reasons. For example, infrared (IR) camera technology can detect targets in decreased visibility conditions that a human with binoculars cannot.

Short-wave infrared (SWIR) cameras can detect other vessels even through fog, while long-wave infrared (LWIR) cameras can detect other vessels, debris and floating obstacles even in pitch-black conditions.

It is important to acknowledge that these high-end technologies come with costs attached, meaning that any claim to be ‘better than a human’ should be weighed up from the financial as well as the practical standpoint. In doing so, however, it is worth considering that automated advantages are cumulative: the human lookout needs to process, remember and track targets detected visually and reconcile this information with that coming from the AIS and ARPA radar.

This post appeared first on MarineLink News.
