Continuous monitoring of a shore plays an essential role in designing strategies for shore protection against erosion. We present our own solution that precisely extracts the coastline from SAR image data even when it is not recognizable by a human. The solution has been validated against the coastline's real GPS location during a Signate competition, where it finished as runner-up among 109 teams worldwide.
We present a new version of YOLO extended with instance segmentation, called Poly-YOLO. In comparison with YOLOv3, Poly-YOLO has only 60% of its trainable parameters yet improves mAP by a relative 40%. Poly-YOLO performs instance segmentation using bounding polygons. The network is trained to detect size-independent polygons defined on a polar grid.
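The idea of a size-independent polar representation of a bounding polygon can be illustrated as follows. This is a minimal sketch, not Poly-YOLO's exact encoding: the function name, the per-sector tie-breaking rule, and the normalization by the bounding-box half-diagonal are our own illustrative assumptions.

```python
import math

def polygon_to_polar(vertices, num_sectors=24):
    """Encode polygon vertices on a polar grid around the bounding-box centre.

    Each vertex becomes an (angle, relative distance) pair; the full circle is
    split into `num_sectors` angular bins, and each bin keeps at most one
    vertex (here: the farthest one). Distances are divided by the bounding
    box's half-diagonal, which makes the encoding size-independent.
    Illustrative sketch only; Poly-YOLO's actual encoding may differ.
    """
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    # half-diagonal of the axis-aligned bounding box, used for normalization
    half_diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) / 2
    sectors = [None] * num_sectors
    for x, y in vertices:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        dist = math.hypot(x - cx, y - cy) / half_diag  # size-independent
        k = int(angle / (2 * math.pi) * num_sectors) % num_sectors
        # keep the farthest vertex per sector
        if sectors[k] is None or dist > sectors[k][1]:
            sectors[k] = (angle, dist)
    return sectors
```

For an axis-aligned square, every corner lands in a different sector with relative distance 1, so the encoding is identical for squares of any size.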
We present BBRefinement, a conceptually simple yet powerful and flexible scheme for refining bounding-box predictions. Our approach can be built on top of an arbitrary object detector and produces more precise predictions. Because the problem is transformed into a domain where BBRefinement need not handle multiscale detection, object-class recognition, confidence computation, or multiple detections, training is much more effective. As a result, it can refine even COCO's ground-truth labels into a more precise form. The refinement process is fast and runs in real time on standard hardware.
By car license plate recognition, we mean a software system that processes images and provides an alphanumeric transcription of the license plates they contain. We divide the task into four sub-tasks: license plate localization, license plate extraction, character segmentation, and character recognition. The resulting application has been licensed exclusively to our commercial partner.
We focus on visual tracking of insects in night-recorded movies. The goal is instance tracking, i.e., maintaining correct matching between identities across frames. We propose a pattern-tracking mechanism based on the F-transform and implement user-friendly software for handling the movies.
We report on the Signate 'Tobacco detection and classification' competition, where we finished fourth out of more than one thousand registered competitors. The competition was divided into two phases: the first served as a development phase, i.e., participants could make up to three submissions per day and see their score. The second phase then determined the final leaderboard by evaluating the selected submission on new (unseen) data. In total, the training dataset included approximately 26,000 cigarette boxes, which had to be detected in shelf images and correctly classified. The classification task was to link each cigarette box with one of 223 predefined classes.
Training large computer vision models requires large annotated image datasets, which may be costly to assemble manually. To avoid this problem, we created a Synthetic Data Generating Framework based on Unreal Engine that can generate a million photorealistic images per day. Together with the photorealistic images, our framework produces detailed automatic annotations with minimal overhead.
In cooperation with the department of biology, we have developed algorithms for the classification of dragonflies. Correctly monitoring the populations of different dragonfly species is vital for evaluating human impact on the environment. We have utilized a state-of-the-art neural network approach.
We present a cascade of filters based on a fuzzy representation of images. This representation captures the uncertainty underlying the intensity of a pixel by means of a fuzzy set. Our approach provides results similar to standard ones with a significant reduction in computational time.
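One common way to express a pixel's intensity by fuzzy sets is through membership degrees in a uniform triangular fuzzy partition of the intensity range. The sketch below illustrates this idea only; the function name and the choice of a Ruspini (triangular, uniformly spaced) partition are our assumptions, not necessarily the exact representation used in the filters.

```python
def triangular_memberships(intensity, centers):
    """Fuzzy representation of a single intensity value.

    Returns the membership degrees of `intensity` in triangular fuzzy sets
    centred at `centers` (assumed uniformly spaced). With this partition the
    degrees of any intensity inside the range sum to 1, so the crisp value
    is replaced by a small vector expressing its uncertainty.
    Illustrative sketch only.
    """
    width = centers[1] - centers[0]  # uniform spacing assumed
    return [max(0.0, 1.0 - abs(intensity - c) / width) for c in centers]
```

For example, with centers at 0, 128, and 256, the intensity 64 belongs half to the "dark" set and half to the "medium" set.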
The work is based on our F-transform pattern matching algorithm and shows how an arbitrary pattern can be detected in a movie in real time on low-powered hardware.
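The core of the F-transform is the computation of components: weighted averages of the signal under basis functions forming a fuzzy partition. The 1-D sketch below shows this direct transform with a uniform triangular partition; the function name and the idea of comparing such component vectors between pattern and frame are illustrative assumptions about how matching could be set up, not the paper's exact algorithm.

```python
import numpy as np

def f_transform_1d(f, n_components):
    """Direct F-transform of a 1-D signal.

    Uses a uniform triangular fuzzy partition of [0, 1]; each component F_k
    is the weighted average of f under the k-th basis function A_k:
        F_k = sum(f * A_k) / sum(A_k).
    The resulting short vector is a compressed descriptor of the signal,
    which is what makes F-transform-based matching cheap. Sketch only.
    """
    N = len(f)
    x = np.linspace(0.0, 1.0, N)
    nodes = np.linspace(0.0, 1.0, n_components)
    h = nodes[1] - nodes[0]  # spacing of the partition nodes
    components = np.empty(n_components)
    for k, xk in enumerate(nodes):
        A = np.clip(1.0 - np.abs(x - xk) / h, 0.0, None)  # triangular A_k
        components[k] = np.sum(f * A) / np.sum(A)
    return components
```

A constant signal is reproduced exactly by every component, a basic sanity property of the transform.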
We extend the abilities of the Parrot Bebop 2 drone, equipped with a front camera, with an object-tracking application. The drone streams video to a mobile phone, where the proposed application processes the stream, tracks the object, and automatically controls the drone's movement.
We propose a new hybrid image compression algorithm that combines the F-transform and JPEG. We show that the hybrid algorithm achieves significantly higher decompressed-image quality than pure JPEG.
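One plausible way such a hybrid can be organized is to let the F-transform produce a small set of coarse components, reconstruct a smooth approximation from them, and hand the residual to a JPEG coder. The sketch below shows only the splitting step; block averaging stands in for the F-transform components, nearest-neighbour upsampling for the inverse transform, and the function name is hypothetical. The paper's actual partition and coupling with JPEG may differ.

```python
import numpy as np

def hybrid_split(image, block=8):
    """Split an image into coarse components and a residual.

    The block means play the role of F-transform components; replicating
    them back to full resolution approximates the inverse F-transform.
    The residual (image - approximation) is the part that would then be
    passed to a JPEG coder in a hybrid scheme. Illustrative sketch only;
    assumes image dimensions are divisible by `block`.
    """
    h, w = image.shape
    # coarse components: mean over each block x block tile
    comps = image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # approximate inverse: nearest-neighbour upsampling of the components
    approx = np.repeat(np.repeat(comps, block, axis=0), block, axis=1)
    residual = image - approx
    return comps, residual
```

By construction, the residual of each block has zero mean, so the coarse part carries the low-frequency content and the residual only the detail.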