EP3151195B1 - Optical tracking system and method for calculating the posture of a marker part in an optical tracking system - Google Patents


Info

Publication number
EP3151195B1
Authority
EP
European Patent Office
Prior art keywords
coordinate
pattern
denotes
matrix
lens
Prior art date
Legal status
Active
Application number
EP15799209.0A
Other languages
English (en)
French (fr)
Other versions
EP3151195A1 (de)
EP3151195A4 (de)
Inventor
Hyun Ki Lee
You Seong Chae
Min Young Kim
Current Assignee
Koh Young Technology Inc
Industry Academic Cooperation Foundation of KNU
Original Assignee
Koh Young Technology Inc
Industry Academic Cooperation Foundation of KNU
Priority date
Filing date
Publication date
Application filed by Koh Young Technology Inc, Industry Academic Cooperation Foundation of KNU
Publication of EP3151195A1
Publication of EP3151195A4
Application granted
Publication of EP3151195B1

Classifications

    • G PHYSICS — G06 COMPUTING; CALCULATING OR COUNTING — G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • the present invention relates to an optical tracking system and a method for calculating the posture of a marker part of the optical tracking system. More particularly, the present invention relates to an optical tracking system and a method for calculating the posture of a marker part of the optical tracking system by using pattern information.
  • an optical tracking system is used to track the position of a predetermined object.
  • the optical tracking system may be utilized to track a target in real time in equipment, such as a surgical robot.
  • the optical tracking system generally includes a plurality of markers attached to a target and image forming units for forming images by using light emitted by the markers, and mathematically calculates information acquired from the image forming units to thereby obtain position information or the like.
  • the conventional optical tracking system includes a plurality of markers, which increases the size of the equipment, and may thus be inappropriate for tracking that requires fine precision. Therefore, an optical tracking system that can track the markers accurately and easily while simplifying the markers is required.
  • an aspect of the present invention is to provide an optical tracking system that can track markers accurately and easily while simplifying the markers.
  • Another aspect of the present invention is to provide a method of calculating the posture of a marker part of an optical tracking system that can be applied to the optical tracking system above.
  • according to an aspect, there is provided an optical tracking system according to claim 1.
  • An optical tracking system includes a marker part; an image forming part; and a processing part.
  • the marker part includes: a pattern that has particular information; and a first lens that is spaced apart from the pattern and has a first focal length.
  • the image forming part includes: a second lens that has a second focal length; and an image forming unit that is spaced apart from the second lens and on which an image of the pattern is formed by the first lens and the second lens.
  • the processing part determines the posture of the marker part from a coordinate conversion formula between a coordinate on the pattern surface of the pattern and a pixel coordinate on the image of the pattern, and tracks the marker part by using the determined posture of the marker part.
  • the processing part may acquire: a first conversion matrix that converts a first coordinate corresponding to a real coordinate on the pattern surface of the pattern to a second coordinate corresponding to a three-dimensional local coordinate for the first lens of the marker part; and a second conversion matrix that converts a third coordinate corresponding to a three-dimensional local coordinate of the second coordinate for the second lens to a fourth coordinate corresponding to the pixel coordinate on the image of the pattern of the image forming part, wherein the coordinate conversion formula may be defined to convert the first coordinate to the fourth coordinate while containing the first conversion matrix and the second conversion matrix, and the processing part may acquire, from the coordinate conversion formula, a posture definition matrix that defines the posture of the marker part.
  • the processing part may acquire data on the first coordinate and the fourth coordinate from three or more photographed images, and may acquire calibration values of u_c, v_c, and f_b by applying the acquired data to the corresponding calibration equation, thereby acquiring the first conversion matrix.
  • the processing part may acquire data on the first coordinate and the fourth coordinate from three or more photographed images, and may acquire calibration values of f_c, pw, and ph by applying the acquired data to the corresponding calibration equation, thereby acquiring the second conversion matrix.
  • Another exemplary embodiment as disclosed provides a method for calculating the posture of the marker part of an optical tracking system. The optical tracking system includes a marker part, which is configured to include a pattern that has particular information and a first lens that is spaced apart from the pattern and has a first focal length, and an image forming part, which is configured to include a second lens that has a second focal length and an image forming unit that is spaced apart from the second lens and on which an image of the pattern is formed by the first lens and the second lens. The method calculates the posture of the marker part for tracking the marker part.
  • the method for calculating the posture of the marker part of an optical tracking system may include: acquiring a first conversion matrix that converts a first coordinate corresponding to a real coordinate on the pattern surface of the pattern to a second coordinate corresponding to a three-dimensional local coordinate for the first lens of the marker part and a second conversion matrix that converts a third coordinate corresponding to a three-dimensional local coordinate of the second coordinate for the second lens to a fourth coordinate corresponding to a pixel coordinate on the image of the image forming part; and acquiring a posture definition matrix that defines the posture of the marker part from the coordinate conversion formula that converts the first coordinate to the fourth coordinate while containing the first conversion matrix and the second conversion matrix.
  • the marker part in the optical tracking system for tracking a marker part, can be miniaturized while including the pattern of particular information to enable the tracking, and the posture of the marker part can be determined by modeling the optical system of the marker part and the image forming part with the coordinate conversion formula. Therefore, it is possible to accurately track the marker part by a simpler and easier method.
  • although the terms first, second, and so on may be used to describe various elements, the elements are not limited by the terms. The terms are used only to distinguish one element from another. For example, a first element may be named a second element without departing from the scope of the present invention, and vice versa.
  • FIG. 1 is a conceptual diagram illustrating an optical tracking system, according to an embodiment of the present invention.
  • the optical tracking system 100 includes a marker part 110, an image forming part 120, and a processing part 130.
  • the marker part 110 includes a pattern 112 and a first lens 114.
  • the pattern 112 has particular information.
  • the particular information of the pattern may be recognized by the image forming part 120, which will be described later, for tracking, and may include one-dimensional patterns, such as bar codes, or two-dimensional patterns, such as QR codes.
  • the first lens 114 is spaced apart from the pattern 112, and has a first focal length.
  • the distance between the first lens 114 and the pattern 112 may be the same as the first focal length of the first lens 114 in order for the image forming part 120, which will be described later, to form an image of the pattern 112 and to track the pattern 112 from a distance.
  • since the pattern 112 lies in the focal plane, a bundle of rays from a point on the pattern 112 that passes through the first lens 114 may emerge parallel.
  • the first lens 114, for example, may perform a function similar to that of an objective lens of a microscope.
  • the marker part 110 may not include a light source. In this case, the marker part 110 may be utilized as a passive marker that uses light located outside of the marker part 110. On the other hand, the marker part 110 may include a light source. In this case, the marker part 110 may be utilized as an active marker that uses its own light.
  • the image forming part 120 includes a second lens 122 and an image forming unit 124.
  • the second lens 122 has a second focal length.
  • the second lens 122 may perform a similar function as an eyepiece of a microscope.
  • the image forming unit 124 is spaced apart from the second lens 122 and the image of the pattern 112 is formed on the image forming unit 124 by the first lens 114 and the second lens 122.
  • the distance between the image forming unit 124 and the second lens 122 may be the same as the second focal length of the second lens 122 so that the parallel bundle of rays from the pattern 112, which has passed through the first lens 114, is focused into an image.
  • the image forming unit 124 may include an image sensor, such as a CCD (charge coupled device), a CMOS (complementary metal-oxide semiconductor), or the like.
  • the processing part 130 determines the posture of the marker part 110 from a coordinate conversion formula between the coordinate on the pattern surface of the pattern 112 and a pixel coordinate on the image of the pattern 112.
  • the processing part 130 tracks the marker part 110 by using the determined posture of the marker part 110.
  • the processing part 130 may include a computer, or more specifically, may include a central processing unit (CPU).
  • FIG. 2 is a flowchart schematically showing a problem-solving process that is necessary for the processing part of the optical tracking system of FIG. 1 in determining the posture of the marker part.
  • the system modeling is conducted with respect to the optical tracking system 100, which has the configuration as described above (S100).
  • the coordinate conversion formula may be configured by modeling the coordinate conversion according to the optical system of the optical tracking system 100.
  • the modeling of the coordinate conversion according to the optical system of the optical tracking system 100 may be made by each optical system of the marker part 110 and the image forming part 120 and by a relationship therebetween.
  • the first and second conversion matrices, which will be described later, are calibrated (S200).
  • the first conversion matrix converts the first coordinate to the second coordinate, and the second conversion matrix converts the third coordinate to the fourth coordinate.
  • since the coordinate conversion formula acquired as a result of the system modeling is determined as an equation of various parameters of the optical systems of the marker part 110 and the image forming part 120 shown in FIG. 1 , and since these parameters may not be accurately acquired or their values may vary with the mechanical arrangement state, a more accurate system modeling can be achieved by calibrating the first conversion matrix and the second conversion matrix.
  • the posture refers to the direction in which the marker part 110 faces
  • the posture definition matrix provides information about the posture of the marker part 110 so that roll, pitch, and yaw of the marker part 110 may be recognized from the posture definition matrix.
  • FIG. 3 is a flowchart illustrating a process of system modeling in the problem-solving process of FIG. 2
  • FIG. 4 is a conceptual diagram for explaining the process of system modeling in FIG. 3 .
  • equations for three straight lines are acquired according to optical paths between the marker part 110 and the image forming part 120 (S110).
  • the central point of the first lens 114 is referred to as the first central point A and the central point of the second lens 122 is referred to as the second central point O, while point B refers to a certain point on the pattern 112.
  • a ray with respect to a certain point B passes straight through the first central point A of the first lens 114, and the ray that has passed through the first central point A reaches the second lens 122 at point D. Then, the ray is refracted by the second lens 122 at the point D to then form an image on the image forming unit 124 at point E.
  • a ray passes straight through the first central point A of the first lens 114 and the second central point O of the second lens 122 to then meet the extension line of the line segment DE at point C.
  • the linear equation for the line segment AO (or the line segment AC), the linear equation for the line segment AD, and the linear equation for the line segment DC are defined as L1, L2, and L3, respectively, as shown in FIG. 4 .
  • the coordinate of the first central point A is configured as (X,Y,Z), and the coordinate of the second central point O is configured as the origin (0,0,0). Since the coordinate of the second central point O of the second lens 122 is configured as the origin (0,0,0), the three-dimensional local coordinate system for the second lens 122 is the same as the world coordinate system.
  • the coordinate of a certain point (corresponding to the point B) on the pattern 112 is configured as (u,v), and the coordinate of the central point of the pattern 112 is configured as (u_c,v_c).
  • the coordinate of a pixel of the image (corresponding to the point E) of the pattern 112, which is formed on the image forming unit 124, is configured as (u',v').
  • the coordinates (u,v) and (u_c,v_c), for example, may be configured based on the left upper side of the pattern 112, and the coordinate (u',v'), for example, may be configured based on the left upper side of the image of the pattern 112.
  • the z-axis coordinate of the image forming unit 124 may be -f_c.
  • the equations of the three straight lines may be acquired in sequence by using information above.
  • the equation of the straight line L1 is acquired from the line segment AO, and the position of the point C is acquired from the same.
  • the equation of the straight line L2 is acquired from the line segment AB, and the position of the point D is acquired from the same.
  • the equation of the straight line L3 is acquired from the line segment DC.
  • the posture definition matrix for defining the posture of the marker part 110 is defined as a 3×3 matrix [R], and the components of the matrix [R] are defined as r_11, r_12, r_13, r_21, r_22, r_23, r_31, r_32, and r_33, respectively.
  • the world coordinate of the point B may be determined as (r_11·u + r_12·v + r_13·f_b + X, r_21·u + r_22·v + r_23·f_b + Y, r_31·u + r_32·v + r_33·f_b + Z), which is converted from the pattern coordinate (u,v) of the point B based on the matrix [R] and the focal length f_b of the first lens 114.
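The conversion above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not code from the patent; the function and parameter names (e.g. `lens_center`) are assumptions.

```python
import numpy as np

def pattern_to_world(uv, R, lens_center, f_b):
    """World coordinate of a pattern point (u, v): the point is lifted to the
    3-D local frame of the first lens as (u, v, f_b), rotated by the posture
    definition matrix [R], and shifted by the world position (X, Y, Z) of the
    first central point A (here `lens_center`, an illustrative name)."""
    u, v = uv
    local = np.array([u, v, f_b])  # 3-D local coordinate for the first lens
    return R @ local + np.asarray(lens_center, dtype=float)

# With the identity posture, the components reduce to (u + X, v + Y, f_b + Z).
point_B = pattern_to_world((2.0, 3.0), np.eye(3), (10.0, 20.0, 30.0), f_b=5.0)
# point_B == [12., 23., 35.]
```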
  • the position of the point E (the world coordinate of the point E) may be acquired from the equation of the straight line L3 obtained above so that the pixel coordinate (u',v') of the point E may be obtained from the same.
  • since the pixel coordinate (u',v') of the point E may be expressed in terms of the coordinate (u,v) of the point B on the pattern, the relational equation between the pattern 112 and the image of the pattern corresponding to the point E may be determined.
  • (u,v) denotes the first coordinate;
  • (u',v') denotes the fourth coordinate;
  • [C] denotes the first conversion matrix;
  • [A] denotes the second conversion matrix;
  • [R] denotes the posture definition matrix;
  • (u_c,v_c) denotes the coordinate of the center of the pattern on the pattern surface;
  • f_b denotes the first focal length;
  • f_c denotes the second focal length;
  • pw denotes the width of a pixel of the image of the pattern;
  • ph denotes the height of a pixel of the image of the pattern;
  • the index i of (u_i,v_i) and (u'_i,v'_i) indicates the predetermined i-th pattern.
  • the coordinate conversion formula is made by the product of the first and second conversion matrices, which are described in FIG. 1 , and the posture definition matrix.
  • the coordinate conversion formula is conceptually expressed as [A][R][C], which is the product of the first conversion matrix [C] for converting the first coordinate to the second coordinate, the posture definition matrix [R] for converting the second coordinate to the third coordinate, and the second conversion matrix [A] for converting the third coordinate to the fourth coordinate.
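The modeling above can be sketched as follows (Python/NumPy). The matrix forms for [C] and [A] follow the equations given in the claims and in the matrix [B] derivation (diagonal terms -f_c/pw and -f_c/ph); all numeric values are made-up placeholders, and the function names are illustrative.

```python
import numpy as np

def first_conversion(u_c, v_c, f_b):
    # [C]: pattern coordinate -> 3-D local coordinate for the first lens
    return np.array([[1.0, 0.0, -u_c],
                     [0.0, 1.0, -v_c],
                     [0.0, 0.0,  f_b]])

def second_conversion(f_c, pw, ph, uc_img, vc_img):
    # [A]: camera-frame coordinate -> pixel coordinate on the image
    return np.array([[-f_c / pw, 0.0, uc_img],
                     [0.0, -f_c / ph, vc_img],
                     [0.0, 0.0, 1.0]])

def pattern_to_pixel(uv, A, R, C):
    """Apply s*[u',v',1]^T = [A][R][C][u,v,1]^T and divide out the scale s."""
    p = np.array([uv[0], uv[1], 1.0])
    q = A @ R @ C @ p
    return q[:2] / q[2]  # (u', v')

C = first_conversion(u_c=0.5, v_c=0.5, f_b=5.0)
A = second_conversion(f_c=24.0, pw=0.01, ph=0.01, uc_img=320.0, vc_img=240.0)
uv_pixel = pattern_to_pixel((0.7, 0.9), A, np.eye(3), C)
# uv_pixel == [224., 48.]
```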
  • the calibration is carried out first with respect to the second conversion matrix, and is then carried out with respect to the first conversion matrix.
  • FIG. 5 is a flowchart illustrating a process of calibrating the second conversion matrix in the problem-solving process of FIG. 2 .
  • a matrix [B] and a matrix [H] are defined to facilitate the mathematical analysis for the calibration (S210).
  • the matrix [B] may be defined by using the second conversion matrix [A] as shown in Equation 2
  • the matrix [H] may be defined by using the first conversion matrix [C], the second conversion matrix [A], and the posture definition matrix [R] as shown in Equation 3.
  • Equation 4 is obtained by multiplying both sides of Equation 3 by A^-1:
    A^-1 [h1 h2 h3] = [r1 r2 T],
    where h1, h2, and h3 denote the column vectors of the matrix [H].
  • the matrix [B] may be defined as shown in Equation 5 by using the orthonormality of the posture definition matrix [R] corresponding to a rotation matrix.
  • here, -f_c/pw and -f_c/ph are components of the second conversion matrix [A], where f_c refers to the focal length of the second lens 122 of the image forming part 120, and pw and ph refer to the width and the height of a pixel, respectively.
  • Column vectors b and v_ij are defined as shown in Equation 6 by using the non-zero components of the matrix [B]:
    b = [B_11 B_22 B_13 B_23 B_33]^T
    v_ij = [h_i1·h_j1, h_i2·h_j2, h_i3·h_j1 + h_i1·h_j3, h_i3·h_j2 + h_i2·h_j3, h_i3·h_j3]^T
  • Equation 7 may be acquired by using the orthonormality of the matrix [R] with respect to Equation 6.
  • values of the matrix [B] are obtained by applying data on three or more images to the matrix [H] (S230).
  • the column vector b may be obtained by using a method such as singular value decomposition (SVD). Once the column vector b is obtained, all components of the matrix [B] may be recognized.
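In the spirit of Equations 6 and 7 (a Zhang-style intrinsic calibration step), the SVD solution for b can be sketched as below. This is an assumed implementation, not the patent's code; it builds the two orthonormality constraints per image and takes the null-space direction of the stacked system.

```python
import numpy as np

def v_ij(H, i, j):
    """Row vector v_ij built from columns i, j of a 3x3 matrix [H]
    (5 components, matching b = [B11, B22, B13, B23, B33]^T with B12 = 0)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def solve_b(Hs):
    """Stack the orthonormality constraints v_12^T b = 0 and
    (v_11 - v_22)^T b = 0 for each image's matrix [H], then take the
    right-singular vector of the smallest singular value as b."""
    rows = []
    for H in Hs:
        rows.append(v_ij(H, 0, 1))
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    V = np.vstack(rows)
    _, _, vt = np.linalg.svd(V)
    return vt[-1]  # null-space direction: the components of [B]
```

Once b is known, the components of [B], and from them the entries of [A], follow as in Equations 8 and 9.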
  • v' c , ⁇ , ⁇ , and u' c may be obtained through Equation 8 ( ⁇ and ⁇ are expressed as parameters).
  • all components of the matrix [A] may then be obtained from Equation 9.
  • the calibration of the first conversion matrix [C] is made by using the second conversion matrix [A] that has been previously calibrated.
  • FIG. 6 is a flowchart illustrating a process of calibrating the first conversion matrix in the problem-solving process of FIG. 2 .
  • the matrix [R] is obtained by putting the calibrated matrix [A] into the equation for the matrix [H] (S250).
  • Equation 10 is acquired by putting the second conversion matrix [A] of Equation 9 in Equation 3 and by calculating [R][C] of Equation 1.
  • with [R] = [r1 r2 r3] in Equation 10, [R] may be obtained for each column vector component from Equation 11:
    r1 = A^-1·h1
    r2 = A^-1·h2
    r3 = r1 × r2
  • the product of the matrix [A] and the matrix [R] is defined as the matrix [HK], which is then applied to the coordinate conversion formula of Equation 1 to obtain an equation in the components of the matrix [HK] and the matrix [C].
  • the matrix [HK] may be obtained by using the matrix [A] acquired in Equation 9 and the matrix [R] acquired in Equation 11, and may be applied to the coordinate conversion formula of Equation 1 in order to thereby acquire Equation 12, which comprises the components of the matrix [HK] and the matrix [C].
    s[u' v' 1]^T = [A][R][C][u v 1]^T = [HK][C][u v 1]^T = [HK] [1 0 -u_c; 0 1 -v_c; 0 0 f_b] [u v 1]^T
  • the matrix [AA], the matrix [BB], and the matrix [CC] may be defined as shown in Equation 13 by using the matrix [HK].
  • FIG. 7 is a flowchart illustrating an example of a process for acquiring a posture definition matrix in the problem-solving process of FIG. 2 .
  • Equation 14 may be acquired by arranging the coordinate conversion formula as a system of equations.
  • the matrix [H] is acquired by using a method such as singular value decomposition (SVD) (S320a).
  • 2n equations of Equation 15 are acquired and solved by using a method such as singular value decomposition (SVD).
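The SVD step for the matrix [H] can be sketched as a standard direct-linear-transform estimate from point correspondences (an assumed implementation in Python/NumPy; the names are illustrative):

```python
import numpy as np

def estimate_H(src, dst):
    """Estimate a 3x3 matrix [H] with s*[u',v',1]^T = [H][u,v,1]^T from
    correspondences between pattern coordinates `src` and pixel coordinates
    `dst`. Each pair contributes 2 equations (2n in total for n pairs); the
    stacked homogeneous system is solved by taking the right-singular vector
    associated with the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)
```

At least four non-degenerate correspondences are needed; the result is determined only up to scale, so it is usually normalized (e.g. by its (3,3) entry) before use.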
  • the posture definition matrix may be obtained by other methods.
  • FIG. 8 is a flowchart illustrating another example of a process for acquiring a posture definition matrix in the problem-solving process of FIG. 2 .
  • an equation with respect to each component r_11, r_12, r_13, r_21, r_22, r_23, r_31, r_32, and r_33 of the posture definition matrix [R] is made from Equation 14 in order to thereby acquire Equation 16.
  • the matrix [R] is obtained by using a method such as singular value decomposition (SVD) (S330b).
  • 2n equations of Equation 16 are acquired and solved by using a method such as singular value decomposition (SVD).
  • the posture of the marker part 110 may be calculated by applying the system modeling process and the method for acquiring the posture definition matrix [R] described above to the optical tracking system 100 shown in FIG. 1 .
  • FIG. 9 is a flowchart illustrating a method of calculating the posture of the marker part of the optical tracking system, according to an embodiment of the present invention.
  • the processing part 130 calibrates the first and second conversion matrices from three or more images (S510).
  • the calibration may be substantially the same as operation S200 described in FIG. 2 and operations S210 to S280 described in detail in FIGS. 5 and 6 .
  • the processing part 130 may calibrate the first and second conversion matrices by using only the final equation for the calibration as in operations S230 and S280 among the operations above.
  • the posture definition matrix is acquired from the coordinate conversion formula that contains the first and second conversion matrices (S520).
  • the acquisition of the posture definition matrix may be substantially the same as operation S300 described in FIG. 2 , operations S310 to S330a, and operations S310 to S330b described in detail in FIGS. 7 and 8 .
  • the processing part 130 may acquire the posture definition matrix by using only the final equation for the acquisition of the posture definition matrix as in operations S320a and S320b among the operations above.
  • the processing part 130 may acquire the first conversion matrix for converting the first coordinate to the second coordinate and the second conversion matrix for converting the third coordinate to the fourth coordinate through the calibration in advance, and may acquire the posture definition matrix for defining the posture of the marker part 110 from the coordinate conversion formula.
  • the posture of the marker part 110 may be recognized.
  • the roll, pitch, and yaw of the marker part 110 may be recognized from the posture definition matrix.
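Extracting roll, pitch, and yaw from the posture definition matrix can be sketched as below. The patent does not fix an Euler-angle convention, so this assumes a common Z-Y-X (yaw-pitch-roll) decomposition; the helper names are illustrative.

```python
import numpy as np

def roll_pitch_yaw(R):
    """Roll, pitch, yaw (radians) from a 3x3 rotation matrix, assuming the
    Z-Y-X convention R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

def rotation(roll, pitch, yaw):
    """Build the rotation matrix from the same convention, for a round trip."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx
```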
  • the marker part can be miniaturized while including a pattern of particular information to enable tracking, and the posture of the marker part can be determined by modeling the optical systems of the marker part and the image forming part with the coordinate conversion formula. Therefore, it is possible to accurately track the marker part by a simpler and easier method.

Claims (7)

  1. An optical tracking system (100) comprising:
    a marker part (110) configured to include a pattern (112) having particular information and a first lens (114) that is spaced apart from the pattern (112) and has a first focal length; and an image forming part (120) configured to include a second lens (122) having a second focal length and an image forming unit (124) that is spaced apart from the second lens (122) and on which an image of the pattern (112) is formed by the first lens (114) and the second lens (122),
    wherein the optical tracking system (100) comprises a processing part (130) configured to:
    calculate a first conversion matrix that converts a first coordinate, corresponding to a coordinate on a pattern surface of the pattern (112), to a second coordinate corresponding to a three-dimensional coordinate for the first lens (114) of the marker part (110);
    calculate a second conversion matrix that converts a third coordinate, corresponding to a three-dimensional coordinate of the second coordinate for the second lens (122), to a fourth coordinate corresponding to a pixel coordinate on the image of the pattern (112) of the image forming part (120);
    calculate a posture definition matrix that defines a posture of the marker part (110) from a coordinate conversion formula that is defined to convert the first coordinate to the fourth coordinate while containing the first conversion matrix and the second conversion matrix; and
    track the marker part (110) based on the posture definition matrix,
    wherein a distance between the first lens (114) and the pattern (112) is equal to the first focal length, and a distance between the image forming unit (124) and the second lens (122) is equal to the second focal length,
    wherein the coordinate conversion formula is defined by the following equation,
    s[u' v' 1]^T = [A][R][C][u v 1]^T,
    where (u,v) denotes the first coordinate, (u',v') denotes the fourth coordinate, [C] denotes the first conversion matrix, [A] denotes the second conversion matrix, [R] denotes the posture definition matrix, and s denotes a proportional constant,
    characterized in that the processing part (130) is configured to:
    acquire data of the first coordinate and data of the fourth coordinate; and
    calculate the posture definition matrix by the following equation, to which the acquired data are applied,
    R = [r_11 r_12 r_13; r_21 r_22 r_23; r_31 r_32 r_33]
    [ (f_c/pw)u_1   (f_c/pw)v_1   (f_c/pw)f_b   0   0   0   (u'_1-u'_c)u_1   (u'_1-u'_c)v_1   (u'_1-u'_c)f_b ]
    [ 0   0   0   (f_c/ph)u_1   (f_c/ph)v_1   (f_c/ph)f_b   (v'_1-v'_c)u_1   (v'_1-v'_c)v_1   (v'_1-v'_c)f_b ]
    [ ... ]
    [ (f_c/pw)u_n   (f_c/pw)v_n   (f_c/pw)f_b   0   0   0   (u'_n-u'_c)u_n   (u'_n-u'_c)v_n   (u'_n-u'_c)f_b ]
    [ 0   0   0   (f_c/ph)u_n   (f_c/ph)v_n   (f_c/ph)f_b   (v'_n-v'_c)u_n   (v'_n-v'_c)v_n   (v'_n-v'_c)f_b ]
    × [r_11 r_12 r_13 r_21 r_22 r_23 r_31 r_32 r_33]^T = 0,
    and
    where (u_1,v_1), ..., (u_n,v_n) denote data of the first coordinate, (u'_1,v'_1), ..., (u'_n,v'_n) denote data of the fourth coordinate, (u'_c,v'_c) denotes the pixel coordinate on the image of the pattern (112) corresponding to a center of the pattern (112), f_c denotes the second focal length, pw denotes a width of a pixel of the image of the pattern (112), and ph denotes a height of a pixel of the image of the pattern (112).
  2. The optical tracking system (100) according to claim 1, wherein the first conversion matrix is defined by the following equation,
    C = [1 0 -u_c; 0 1 -v_c; 0 0 f_b],
    and
    where (u_c,v_c) denotes a coordinate of a center of the pattern (112) and f_b denotes the first focal length.
  3. The optical tracking system (100) according to claim 2, wherein the processing part (130) is configured to calculate the first conversion matrix by acquiring calibration values of u_c, v_c, and f_b from three or more photographed images.
  4. The optical tracking system (100) according to claim 1, wherein the second conversion matrix is defined by the following equation,
    A = [-f_c/pw 0 u'_c; 0 -f_c/ph v'_c; 0 0 1],
    and
    where (u'_c,v'_c) denotes the pixel coordinate on the image of the pattern (112) corresponding to a center of the pattern (112), f_c denotes the second focal length, pw denotes a width of a pixel of the image of the pattern (112), and ph denotes a height of a pixel of the image of the pattern (112).
  5. The optical tracking system (100) according to claim 4, wherein the processing part (130) is configured to calculate the second conversion matrix by acquiring calibration values of f_c, pw, and ph from three or more photographed images.
  6. A method of calculating, by an optical tracking system (100), an orientation of a marker part (110) of the optical tracking system (100), the optical tracking system (100) comprising the marker part (110), configured to include a pattern (112) having particular information and a first lens (114) that is spaced apart from the pattern (112) and has a first focal length, and an image forming part (120) configured to include a second lens (122) having a second focal length and an image forming unit (124) that is spaced apart from the second lens (122) and on which an image of the pattern (112) is formed by the first lens (114) and the second lens (122),

    the method comprising:

    calculating a first transformation matrix that transforms a first coordinate, corresponding to a coordinate on the pattern surface of the pattern (112), into a second coordinate corresponding to a three-dimensional coordinate with respect to the first lens (114) of the marker part (110);

    calculating a second transformation matrix that transforms a third coordinate, corresponding to a three-dimensional coordinate of the second coordinate with respect to the second lens (122), into a fourth coordinate corresponding to a pixel coordinate on the image of the pattern (112) of the image forming part (120); and

    calculating an orientation definition matrix that defines the orientation of the marker part (110) from the coordinate transformation formula that transforms the first coordinate into the fourth coordinate while including the first transformation matrix and the second transformation matrix,

    wherein a distance between the first lens (114) and the pattern (112) is equal to the first focal length, and a distance between the image forming unit (124) and the second lens (122) is equal to the second focal length,

    wherein the coordinate transformation formula is defined by the following equation:

    s \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} = [A][R][C] \begin{pmatrix} u \\ v \\ 1 \end{pmatrix},

    where (u, v) denotes the first coordinate, (u', v') denotes the fourth coordinate, [C] denotes the first transformation matrix, [A] denotes the second transformation matrix, [R] denotes the orientation definition matrix, and s denotes a proportional constant, and

    the method being characterized in that it further comprises:

    acquiring data of the first coordinate and data of the fourth coordinate; and

    calculating the orientation definition matrix by the following equation, to which the acquired data are applied:

    R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}

    \begin{pmatrix}
    \frac{f_c}{p_w} u_1 & \frac{f_c}{p_w} v_1 & \frac{f_c}{p_w} f_b & 0 & 0 & 0 & (u'_c - u'_1) u_1 & (u'_c - u'_1) v_1 & (u'_c - u'_1) f_b \\
    0 & 0 & 0 & \frac{f_c}{p_h} u_1 & \frac{f_c}{p_h} v_1 & \frac{f_c}{p_h} f_b & (v'_c - v'_1) u_1 & (v'_c - v'_1) v_1 & (v'_c - v'_1) f_b \\
    \vdots & & & & & & & & \vdots \\
    \frac{f_c}{p_w} u_n & \frac{f_c}{p_w} v_n & \frac{f_c}{p_w} f_b & 0 & 0 & 0 & (u'_c - u'_n) u_n & (u'_c - u'_n) v_n & (u'_c - u'_n) f_b \\
    0 & 0 & 0 & \frac{f_c}{p_h} u_n & \frac{f_c}{p_h} v_n & \frac{f_c}{p_h} f_b & (v'_c - v'_n) u_n & (v'_c - v'_n) v_n & (v'_c - v'_n) f_b
    \end{pmatrix}
    \begin{pmatrix} r_{11} \\ r_{12} \\ r_{13} \\ r_{21} \\ r_{22} \\ r_{23} \\ r_{31} \\ r_{32} \\ r_{33} \end{pmatrix} = 0,

    where (u_1, v_1), ..., (u_n, v_n) denote data of the first coordinate, (u'_1, v'_1), ..., (u'_n, v'_n) denote data of the fourth coordinate, (u'_c, v'_c) denotes the pixel coordinate on the image of the pattern (112) that corresponds to a center of the pattern (112), f_c denotes the second focal length, p_w denotes a width of a pixel of the image of the pattern (112), and p_h denotes a height of a pixel of the image of the pattern (112).
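The homogeneous system above determines the nine entries r_11 ... r_33 only up to a common scale, and in practice such systems are commonly solved with a singular value decomposition. The sketch below is not from the patent: all calibration values and point data are synthetic, pattern coordinates are taken relative to the pattern center (so [C] contributes only the focal-length component f_b), and a known in-plane rotation is recovered to check the construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration values (placeholders, not from the patent).
fc, pw, ph = 12.0, 0.006, 0.006   # second focal length, pixel width/height
fb = 25.0                          # first focal length
upc, vpc = 320.0, 240.0            # (u'_c, v'_c), pattern-center pixel coordinate

# Ground-truth orientation matrix [R] (a rotation about the optical axis)
# and synthetic pattern coordinates (u_i, v_i) relative to the pattern center.
t = 0.3
R_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
uv = rng.uniform(-5.0, 5.0, size=(6, 2))

# Forward model:  s [u', v', 1]^T = A R [u, v, f_b]^T
A = np.array([[fc / pw, 0.0, upc], [0.0, fc / ph, vpc], [0.0, 0.0, 1.0]])
h = (A @ R_true @ np.column_stack([uv, np.full(len(uv), fb)]).T).T
img = h[:, :2] / h[:, 2:3]         # pixel data (u'_i, v'_i)

# Build the 2n x 9 homogeneous system (two rows per correspondence).
rows = []
for (u, v), (up, vp) in zip(uv, img):
    rows.append([fc / pw * u, fc / pw * v, fc / pw * fb, 0.0, 0.0, 0.0,
                 (upc - up) * u, (upc - up) * v, (upc - up) * fb])
    rows.append([0.0, 0.0, 0.0, fc / ph * u, fc / ph * v, fc / ph * fb,
                 (vpc - vp) * u, (vpc - vp) * v, (vpc - vp) * fb])
M = np.array(rows)

# Null vector of M: right singular vector of the smallest singular value.
r = np.linalg.svd(M)[2][-1]
R_est = r.reshape(3, 3)
R_est /= np.linalg.norm(R_est[0])  # rotation rows have unit norm (fix scale)
if np.linalg.det(R_est) < 0:       # the overall sign is also undetermined
    R_est = -R_est
print(np.allclose(R_est, R_true, atol=1e-6))
```

Six correspondences give a 12x9 matrix of rank 8, leaving exactly the one-dimensional null space from which the orientation definition matrix is read off.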
  7. The method of claim 6, wherein the first transformation matrix is defined by the following equation:

    C = \begin{pmatrix} 1 & 0 & u_c \\ 0 & 1 & v_c \\ 0 & 0 & f_b \end{pmatrix},

    where (u_c, v_c) denotes the coordinate of a center of the pattern (112) and f_b denotes the first focal length,

    and wherein the second transformation matrix is defined by the following equation:

    A = \begin{pmatrix} \frac{f_c}{p_w} & 0 & u'_c \\ 0 & \frac{f_c}{p_h} & v'_c \\ 0 & 0 & 1 \end{pmatrix},

    where (u'_c, v'_c) denotes the pixel coordinate on the image of the pattern (112) that corresponds to the center of the pattern (112), f_c denotes the second focal length, p_w denotes a width of a pixel of the image of the pattern (112), and p_h denotes a height of a pixel of the image of the pattern (112).
EP15799209.0A 2014-05-29 2015-05-29 Optical tracking system and method for calculating the position of a marker part in an optical tracking system Active EP3151195B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020140065168A KR101615086B1 (ko) 2014-05-29 2014-05-29 Optical tracking system and method for calculating posture of marker part of optical tracking system
PCT/KR2015/005442 WO2015183049A1 (ko) 2014-05-29 2015-05-29 Optical tracking system and method for calculating posture of marker part of optical tracking system

Publications (3)

Publication Number Publication Date
EP3151195A1 EP3151195A1 (de) 2017-04-05
EP3151195A4 EP3151195A4 (de) 2017-11-22
EP3151195B1 true EP3151195B1 (de) 2019-08-14

Family

ID=54699307

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15799209.0A Active EP3151195B1 (de) 2014-05-29 2015-05-29 Optisches verfolgungssystem und verfahren zur berechnung der position eines markerteils in einem optischen verfolgungssystem

Country Status (6)

Country Link
US (2) US10229506B2 (de)
EP (1) EP3151195B1 (de)
JP (1) JP6370478B2 (de)
KR (1) KR101615086B1 (de)
CN (1) CN106462977B (de)
WO (1) WO2015183049A1 (de)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101820682B1 (ko) 2016-08-09 2018-01-23 Koh Young Technology Inc. Marker for optical tracking, optical tracking system, and optical tracking method
CN107194968B (zh) * 2017-05-18 2024-01-16 Tencent Technology (Shanghai) Co., Ltd. Image recognition and tracking method and apparatus, intelligent terminal, and readable storage medium
KR102166372B1 (ko) * 2017-12-20 2020-10-15 Koh Young Technology Inc. Optical tracking system and optical tracking method
KR102102291B1 (ko) 2017-12-20 2020-04-21 Koh Young Technology Inc. Optical tracking system and optical tracking method
CN109520419A (zh) * 2018-12-06 2019-03-26 Wingtech Communications Co., Ltd. Method and apparatus for measuring object dimensions from images, and mobile terminal
CN114486292B (zh) * 2022-04-18 2022-07-12 China Automotive Technology and Research Center Co., Ltd. Method, device and storage medium for measuring dummy motion response in crash tests

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007090448A (ja) 2005-09-27 2007-04-12 Honda Motor Co Ltd Two-dimensional code detection device and program therefor, and robot control information generation device and robot
JP4926817B2 (ja) * 2006-08-11 2012-05-09 Canon Inc. Marker placement information measuring apparatus and method
CN100429478C (zh) 2007-01-15 2008-10-29 Harbin Institute of Technology Method for testing laser beam divergence angle based on a microlens array
CN100483184C (zh) * 2007-05-29 2009-04-29 Southeast University Variable-focus lens three-dimensional display
KR100971667B1 (ko) 2008-12-11 2010-07-22 Incheon University Industry-Academic Cooperation Foundation Method and apparatus for providing immersive content through an augmented book
US8366003B2 (en) * 2009-07-23 2013-02-05 Massachusetts Institute Of Technology Methods and apparatus for bokeh codes
JP5423406B2 (ja) 2010-01-08 2014-02-19 Sony Corp Information processing apparatus, information processing system, and information processing method
KR101262181B1 (ko) 2010-05-03 2013-05-14 Korea Advanced Institute of Science and Technology (KAIST) Method and apparatus for detecting the position of a robot fish in an aquarium
US9832441B2 (en) * 2010-07-13 2017-11-28 Sony Interactive Entertainment Inc. Supplemental content on a mobile device
JP5453352B2 (ja) 2011-06-30 2014-03-26 Toshiba Corp Video display device, video display method, and program
US9478030B1 (en) * 2014-03-19 2016-10-25 Amazon Technologies, Inc. Automatic visual fact extraction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
KR20150138500A (ko) 2015-12-10
WO2015183049A1 (ko) 2015-12-03
KR101615086B1 (ko) 2016-04-27
JP2017522674A (ja) 2017-08-10
CN106462977B (zh) 2019-11-12
EP3151195A1 (de) 2017-04-05
US20190073777A1 (en) 2019-03-07
EP3151195A4 (de) 2017-11-22
US20170193670A1 (en) 2017-07-06
JP6370478B2 (ja) 2018-08-08
US10229506B2 (en) 2019-03-12
US10388024B2 (en) 2019-08-20
CN106462977A (zh) 2017-02-22

Similar Documents

Publication Publication Date Title
EP3151195B1 (de) Optical tracking system and method for calculating the position of a marker part in an optical tracking system
US10109066B2 (en) Optical tracking system, and method for calculating posture and location of marker part in optical tracking system
US20210041236A1 (en) Method and system for calibration of structural parameters and construction of affine coordinate system of vision measurement system
KR101857472B1 (ko) 카메라 보정 방법 및 이에 대한 시스템
CN102607526B (zh) 双介质下基于双目视觉的目标姿态测量方法
JP5082941B2 (ja) 標線位置測定装置、標線位置測定用プログラム、および標線マーク
CN104827480A (zh) 机器人系统的自动标定方法
CN110969665A (zh) 一种外参标定方法、装置、系统及机器人
CN105955260B (zh) 移动机器人位置感知方法和装置
Schmidt et al. Automatic work objects calibration via a global–local camera system
CN102096918B (zh) 一种交会对接用相机内参数的标定方法
Jiang et al. Calibration and uncertainty analysis of a combined tracking-based vision measurement system using Monte Carlo simulation
US20210270611A1 (en) Navigation apparatus, navigation parameter calculation method, and medium
Perez-Yus et al. A novel hybrid camera system with depth and fisheye cameras
JP2007034964A (ja) カメラ視点運動並びに3次元情報の復元及びレンズ歪パラメータの推定方法、装置、カメラ視点運動並びに3次元情報の復元及びレンズ歪パラメータの推定プログラム
Fuchs et al. 3D pose detection for articulated vehicles
Dionnet et al. Robust stereo tracking for space applications
Martinez et al. Is linear camera space manipulation impervious to systematic distortions?
Wang et al. Vision based robotic grasping with a hybrid camera configuration
Winkens et al. Optical truck tracking for autonomous platooning
Hou et al. Automatic calibration method based on traditional camera calibration approach
JP2009053080A (ja) 3次元位置情報復元装置およびその方法
KR20170123390A (ko) 3차원 정렬 오차 측정용 입체형 캘리브레이터와, 이를 이용한 3차원 정렬 오차 산출 방법
Hurwitz et al. Evaluation of 7-degree-of-freedom robotic arm precision and effects of calibration methods
CN114516051A (zh) 三个及以上自由度机器人视觉测量的前方交会方法及系统

Legal Events

Date Code Title Description

STAA  Status: the international publication has been made
PUAI  Public reference made under Article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
STAA  Status: request for examination was made
17P   Request for examination filed. Effective date: 20161222
AK    Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX    Request for extension of the European patent. Extension states: BA ME
DAV   Request for validation of the European patent (deleted)
DAX   Request for extension of the European patent (deleted)
A4    Supplementary search report drawn up and despatched. Effective date: 20171023
RIC1  Information provided on IPC code assigned before grant: G06T 7/70 (2017.01) ALI 20171017 BHEP; G06T 7/20 (2017.01) AFI 20171017 BHEP
STAA  Status: examination is in progress
17Q   First examination report despatched. Effective date: 20180829
GRAP  Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
STAA  Status: grant of patent is intended
INTG  Intention to grant announced. Effective date: 20190222
GRAS  Grant fee paid (original code: EPIDOSNIGR3)
GRAA  (Expected) grant (original code: 0009210)
STAA  Status: the patent has been granted
AK    Designated contracting states (kind code of ref document: B1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG   References to national codes: GB: FG4D; CH: EP; AT: REF (ref document 1167917, kind code T, effective 20190815); IE: FG4D; DE: R096 (ref document 602015035932); NL: MP (effective 20190814); LT: MG4D
PG25  Lapsed in a contracting state [announced via postgrant information from national office to EPO]; lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time-limit: LT, HR, NL, FI, SE, RS, ES, AL, LV, TR, AT, EE, DK, IT, RO, PL, SM, SK, CZ, SI, MC, MT, CY, MK (effective 20190814); BG, NO (effective 20191114); GR (effective 20191115); IS (effective 20191214; this entry later deleted, see PG2D, and re-announced effective 20200224); PT (effective 20191216)
REG   AT: MK05 (ref document 1167917, kind code T, effective 20190814); DE: R097 (ref document 602015035932)
PLBE  No opposition filed within time limit (original code: 0009261)
STAA  Status: no opposition filed within time limit
PG2D  Information on lapse in contracting state deleted: IS
26N   No opposition filed. Effective date: 20200603
PG25  Lapsed in a contracting state [announced via postgrant information from national office to EPO]; lapse because of non-payment of due fees: LI, CH (effective 20200531); LU (effective 20200529); IE, GB (effective 20200529); FR, BE (effective 20200531)
REG   BE: MM (effective 20200531)
GBPC  GB: European patent ceased through non-payment of renewal fee. Effective date: 20200529
PGFP  Annual fee paid to national office [announced via postgrant information from national office to EPO]: DE, payment date 20230404, year of fee payment: 9