Hint
This page is only partially interactive. Since it is a static HTML page, only front-end interactivity works: you can click buttons, but the corresponding Python-level responses to those actions won’t occur.
Segmenting images, labelling landmarks, and drawing bounding boxes
ipyannotations also supports image annotations. It currently has support for:
- drawing polygons onto images
- annotating points in an image
- drawing bounding boxes on an image
All of the image annotation widgets share some parameters:
- canvas_size, the size of the canvas on which the image is drawn, in pixels. The format is (width, height), and the default is (700, 500).
- classes, the types of objects you’d like to annotate. This should be a list of strings. The default is None.
All image annotation widgets also share some common UI elements to make annotating easier, such as brightness / contrast adjustments. They also all respond to the hotkeys 1 – 0 to select the class of the item you are labelling.
All image annotation widgets also have an “edit mode”, which allows adjusting existing annotations by click-and-dragging any given point. All widgets also support submitting data using the “Enter” key.
Drawing polygons around shapes of interest
The PolygonAnnotator is designed for drawing the outlines of shapes of interest. This is useful when you need to identify exactly where an object is in an image, for example when training an image segmentation model.
from ipyannotations.images import PolygonAnnotator
widget = PolygonAnnotator(options=["eye", "mouth"])
widget.display("img/baboon.png")
widget
The data in this widget has the following format (note that you can call help
on the data
property of a widget class to see this text):
- property PolygonAnnotator.data
  The annotation data, as List[Dict].
  The format is a list of dictionaries, with the following key / value combinations:
  - 'type' : 'polygon'
  - 'label' : <class label>
  - 'points' : <list of xy-tuples>
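As an illustration of consuming this format, the snippet below computes the axis-aligned bounding box of each polygon. The annotation list is a hypothetical stand-in for what `widget.data` might hold, not actual widget output:

```python
# Hypothetical example of data in the PolygonAnnotator format.
annotations = [
    {
        "type": "polygon",
        "label": "eye",
        "points": [(10, 20), (30, 25), (25, 40), (12, 35)],
    }
]

def polygon_bounds(points):
    """Axis-aligned bounding box (x0, y0, x1, y1) of a list of xy-tuples."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

for ann in annotations:
    print(ann["label"], polygon_bounds(ann["points"]))
```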
Annotating key points, for counting or key-point regression
Key point detection is often used when building augmented reality algorithms such as Snapchat filters. To do so, you usually annotate certain points, like a person’s eyes, nose, and mouth, and the model learns to predict those. You can then place things like funny noses relative to the key points in an image.
Key points can also be useful if you are merely interested in the location or count of an object, not the size or shape.
For example, if you were interested in counting the pigeons in a given photograph (starting with this one taken at the Kazakh pavilion of the VDNKh):
from ipyannotations.images import PointAnnotator
point_widget = PointAnnotator(options=["pigeon"])
point_widget.display("img/vdnkh.jpg")
point_widget
The data for this PointAnnotator
widget looks like:
- property PointAnnotator.data
  The annotation data, as List[Dict].
  The format is a list of dictionaries, with the following key / value combinations:
  - 'type' : 'point'
  - 'label' : <class label>
  - 'coordinates' : <xy-tuple>
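Since each point is a single dictionary, counting annotated objects comes down to tallying labels. A minimal sketch, using a hypothetical annotation list in the format above in place of real widget output:

```python
from collections import Counter

# Hypothetical PointAnnotator-style data after clicking three pigeons.
annotations = [
    {"type": "point", "label": "pigeon", "coordinates": (120, 85)},
    {"type": "point", "label": "pigeon", "coordinates": (200, 150)},
    {"type": "point", "label": "pigeon", "coordinates": (310, 90)},
]

# Tally how many points were placed for each class label.
counts = Counter(ann["label"] for ann in annotations)
print(counts["pigeon"])  # 3
```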
Annotating bounding boxes
Lastly, a common computer vision task is to place a bounding box around an object of interest. This is less precise than polygon segmentation, but often provides enough information for a given application.
To annotate bounding boxes, simply use the BoxAnnotator:
from ipyannotations.images import BoxAnnotator
box_widget = BoxAnnotator(options=["eye", "mouth", "nose", "cheek"])
box_widget.display("img/baboon.png")
box_widget
The data for the BoxAnnotator
widget looks like:
- property BoxAnnotator.data
  The annotation data, as List[Dict].
  The format is a list of dictionaries, with the following key / value combinations:
  - 'type' : 'box'
  - 'label' : <class label>
  - 'xyxy' : <tuple of x0, y0, x1, y1>
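Many training pipelines expect boxes as (x, y, width, height) rather than two corner coordinates, so a conversion from the 'xyxy' format above is a common first step. The annotation entry below is a hypothetical example, not actual widget output:

```python
# Hypothetical BoxAnnotator-style data entry.
annotations = [
    {"type": "box", "label": "eye", "xyxy": (50, 60, 110, 100)},
]

def box_to_xywh(xyxy):
    """Convert corner coordinates (x0, y0, x1, y1) to (x, y, width, height)."""
    x0, y0, x1, y1 = xyxy
    return (x0, y0, x1 - x0, y1 - y0)

for ann in annotations:
    x, y, w, h = box_to_xywh(ann["xyxy"])
    print(ann["label"], (x, y, w, h), "area:", w * h)
```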