CN103914128B - Head-mounted electronic device and input method - Google Patents


Info

Publication number
CN103914128B
CN103914128B (Application CN201210593624.XA)
Authority
CN
China
Prior art keywords
operating body
image
character
head-mounted
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210593624.XA
Other languages
Chinese (zh)
Other versions
CN103914128A (en)
Inventor
刘俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210593624.XA priority Critical patent/CN103914128B/en
Publication of CN103914128A publication Critical patent/CN103914128A/en
Application granted granted Critical
Publication of CN103914128B publication Critical patent/CN103914128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

Embodiments of the invention provide a head-mounted electronic device and an input method. The head-mounted electronic device according to an embodiment of the invention includes: a fixing unit, by which the head-mounted electronic device can be worn on the head of a user; an image processing unit, configured to obtain a first image to be displayed; an image transmission unit, configured to send the first image to a display unit; the display unit, configured to display the first image obtained by the image processing unit; and an input unit, arranged on the fixing unit and/or the display unit and configured to detect a first position of an operating body relative to the head-mounted electronic device. The image processing unit is further configured to generate a second image according to the first position and to generate a third image according to the second image and the first image; and the display unit is further configured to display the third image.

Description

Head-mounted electronic device and input method
Technical field
Embodiments of the invention relate to a head-mounted electronic device and to an input method applied to the head-mounted electronic device.
Background technology
With the development of communication technology, various portable electronic devices, for example, tablet computers, smart phones, game consoles and portable media players, have come into wide use. However, when using current portable electronic devices, a user usually needs to hold the device in one hand and keep a specific posture in order to operate the device or watch the content it displays. This makes it difficult for the user to perform other actions while operating the device, and after operating for a period of time, the user's hands, shoulders, neck and other body parts easily become fatigued.
In order to change the user's operating posture and bring a better usage experience, head-mounted electronic devices with functions such as communication, image display and audio playback have been proposed. However, since the user cannot see a head-mounted electronic device while wearing it, it is inconvenient to perform complex input operations on the device. Therefore most current head-mounted electronic devices provide only simple control buttons. Although complex input can be achieved by connecting the head-mounted electronic device to an external input device such as a mouse or a keyboard, users do not usually carry external input devices, and such devices are also difficult to use while the user is moving.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a head-mounted electronic device and an input method to solve the above problem.
An embodiment of the present invention provides a head-mounted electronic device, including: a fixing unit, by which the head-mounted electronic device can be worn on the head of a user; an image processing unit, arranged in the fixing unit and configured to obtain a first image to be displayed; an image transmission unit, configured to send the first image to a display unit; the display unit, configured to display the first image obtained by the image processing unit, wherein the display unit is connected with the fixing unit, and when the display unit is worn on the user's head by means of the fixing unit, at least a first part of the display unit is located in the user's viewing area and faces the user; and an input unit, arranged on the fixing unit and/or the display unit and configured to detect a first position of an operating body relative to the head-mounted electronic device. The image processing unit is further configured to generate a second image according to the first position and to generate a third image according to the second image and the first image; and the display unit is further configured to display the third image.
Another embodiment of the present invention further provides an input method applied to a head-mounted electronic device. The input method includes: displaying a first image; detecting a first position of an operating body relative to the head-mounted electronic device; generating a second image according to the first position, and generating a third image according to the second image and the first image; and displaying the third image.
With the head-mounted electronic device and the input method according to embodiments of the present invention, a user wearing the head-mounted electronic device can achieve complex input using the head-mounted electronic device itself, without an external input device such as a mouse or a keyboard, thereby freeing the user from the constraints of external input devices and making the device easy to carry and to use while moving.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments will be briefly introduced below.
Fig. 1 is a block diagram showing a head-mounted electronic device according to an embodiment of the present invention.
Fig. 2 is an explanatory diagram showing an illustrative case of input through the input unit according to another example of the present invention.
Fig. 3 is a block diagram showing a display unit according to an example of the present invention.
Fig. 4 is an explanatory diagram showing an illustrative case of the display unit shown in Fig. 3.
Fig. 5 is an explanatory diagram showing an illustrative case of the head-mounted electronic device shown in Fig. 1.
Fig. 6 is an explanatory diagram showing another illustrative case of the electronic device shown in Fig. 1.
Fig. 7 is a flow chart depicting an input method 700 according to an embodiment of the present invention.
Fig. 8 a to Fig. 8 c are to show that the mapping by being used to indicate mapping position according to the generation of the present invention identifies, and Generate the explanation figure of a signal situation of the second image of the second target character corresponding with mapping mark.
Fig. 9 a to Fig. 9 e are to show that the mapping by being used to indicate mapping position according to the generation of the present invention identifies, and Generate the explanation figure of another signal situation of the second image of the second target character corresponding with mapping mark.
Figure 10 a to Figure 10 c are to show that the mapping by being used to indicate mapping position according to the generation of the present invention identifies, and And the explanation figure of a signal situation of the second image of generation the second target character corresponding with mapping mark.
Detailed description of the embodiments
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, in the specification and the drawings, substantially the same steps and elements are denoted by the same reference numerals, and repeated explanation of these steps and elements will be omitted.
Next, a head-mounted electronic device according to an embodiment of the present invention will be described with reference to Fig. 1. Fig. 1 is a block diagram showing a head-mounted electronic device 100 according to an embodiment of the present invention. As shown in Fig. 1, the head-mounted electronic device 100 includes a fixing unit 110, an image processing unit 120, an image transmission unit 130, a display unit 140 and an input unit 150.
The head-mounted electronic device 100 can be worn on the head of a user by the fixing unit 110. For example, the fixing unit 110 may include wearable components such as a helmet or a headband. Alternatively, the fixing unit 110 may include support arms that can rest on the user's ears.
The image processing unit 120 is arranged in the fixing unit. The image processing unit 120 can obtain a first image to be displayed. For example, an image file may be stored in advance in the head-mounted electronic device 100; the image processing unit 120 can obtain the stored image file and perform a playback operation on the file to output the first image. As another example, the head-mounted electronic device 100 may further include a transmission/reception unit to receive an image file sent from another electronic device; the image processing unit 120 can obtain the received image file and perform a playback operation on the file to output the first image.
The image transmission unit 130 can send the first image to the display unit 140, and the display unit 140 can display the first image obtained by the image processing unit. The display unit 140 is connected with the fixing unit 110; when the display unit is worn on the user's head by means of the fixing unit, at least a first part of the display unit 140 is located in the user's viewing area and faces the user.
The input unit 150 may be arranged on the fixing unit and/or the display unit, and detects a first position of an operating body relative to the head-mounted electronic device. The image processing unit 120 can generate a second image according to the first position detected by the input unit 150, and generate a third image according to the second image and the first image. The display unit 140 can then display the third image generated by the image processing unit 120.
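The first/second/third image pipeline described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the `Image` class, function names and layer representation are assumptions chosen only to show the data flow from detected position to composited output.

```python
from dataclasses import dataclass

@dataclass
class Image:
    width: int
    height: int
    layers: tuple  # names of the composited layers, for illustration only

def generate_second_image(first_position):
    """Render an overlay (e.g., a position mark) at the detected first position."""
    x, y = first_position
    return Image(width=640, height=480, layers=(f"cursor@({x},{y})",))

def generate_third_image(first_image, second_image):
    """Composite the second image onto the first image to obtain the third image."""
    return Image(first_image.width, first_image.height,
                 first_image.layers + second_image.layers)

first = Image(640, 480, ("video_frame",))       # first image to be displayed
second = generate_second_image((120, 80))       # generated from the first position
third = generate_third_image(first, second)     # what the display unit shows
print(third.layers)  # ('video_frame', 'cursor@(120,80)')
```

The point of the sketch is the ordering: the input unit supplies only a position, and the display never shows the second image alone, only the composite third image.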
According to an example of the present invention, the input unit 150 may include a sensing panel. The sensing panel can detect the operating body and obtain a second position of the operating body on the sensing panel as the first position of the operating body relative to the head-mounted electronic device 100.
For example, the second position may be the position of the operating body on the sensing panel when the operating body is in contact with the sensing panel. In this case, the image processing unit 120 can generate the second image according to the position of the operating body on the sensing panel at the time of contact.
Optionally, the head-mounted electronic device 100 may further include a first instruction generation unit. The first instruction generation unit can determine, according to the detection result of the sensing panel for the operating body, whether the contact between the operating body and the sensing panel satisfies a first generation condition, and generate a first control instruction when the first generation condition is satisfied. In response to the first control instruction, the image processing unit can generate the second image according to the position of the operating body on the sensing panel at the time of contact. For example, the first generation condition may be a first time threshold: when, according to the detection result of the sensing panel, the contact time of the operating body at a particular position on the sensing panel exceeds the first time threshold, the first instruction generation unit can determine that the first generation condition is satisfied and generate the first control instruction. In response, the image processing unit generates the second image according to the position at which the operating body currently contacts the sensing panel.
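The dwell-time condition above can be illustrated with a short sketch. The class name, the 0.5 s threshold and the 10-pixel position tolerance are assumptions for illustration; the patent specifies only that contact at a particular position must exceed a first time threshold.

```python
FIRST_TIME_THRESHOLD = 0.5   # seconds; illustrative value, not from the patent
POSITION_TOLERANCE = 10      # pixels; how far a "particular position" may drift

class FirstInstructionGenerator:
    def __init__(self):
        self._anchor = None   # (x, y) where the current dwell started
        self._start = None    # timestamp when the current dwell started

    def on_touch(self, pos, now):
        """Feed one touch sample; return a control instruction once satisfied."""
        if (self._anchor is None or
                abs(pos[0] - self._anchor[0]) > POSITION_TOLERANCE or
                abs(pos[1] - self._anchor[1]) > POSITION_TOLERANCE):
            # The operating body moved to a new position: restart the dwell timer.
            self._anchor, self._start = pos, now
            return None
        if now - self._start >= FIRST_TIME_THRESHOLD:
            return {"instruction": "first_control", "position": pos}
        return None

gen = FirstInstructionGenerator()
print(gen.on_touch((100, 100), now=0.0))   # None: the dwell has just started
print(gen.on_touch((102, 101), now=0.6))   # instruction: dwell exceeded 0.5 s
```

A tolerance is needed in practice because a finger held "at a particular position" still jitters by a few pixels between samples.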
As another example, the second position may include the projected position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to a predetermined distance. Further, optionally, the second position may also include the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero. In this case, the image processing unit 120 can generate the second image according to the projected position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, and/or according to the position of the operating body on the sensing panel when that distance is zero.
Optionally, the head-mounted electronic device 100 may further include a second instruction generation unit. The second instruction generation unit can determine, according to the detection result of the sensing panel for the operating body, whether an operating body whose distance from the sensing panel is less than or equal to the predetermined distance satisfies a second generation condition, and generate a second control instruction when the second generation condition is satisfied. In response to the second control instruction, the image processing unit can generate the second image according to the projected position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, and/or according to the position of the operating body on the sensing panel when that distance is zero.
For example, in the case where the second position includes the projected position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to the predetermined distance, the second generation condition may be a first distance threshold between the operating body and the sensing panel, where the first distance threshold is less than or equal to the predetermined distance. When, according to the detection result of the sensing panel, the distance between the operating body and the sensing panel gradually decreases to the first distance threshold, the second instruction generation unit can determine that the second generation condition is satisfied and generate the second control instruction. In response, the image processing unit generates the second image according to the current projected position of the operating body on the sensing panel.
As another example, in the case where the second image is generated from both the projected position of the operating body when its distance from the sensing panel is less than or equal to the predetermined distance and the position of the operating body on the sensing panel when that distance is zero, the second generation condition may be that the distance between the operating body and the sensing panel gradually decreases, and that the difference between the projected position (when the distance is less than or equal to the predetermined distance) and the contact position (when the distance is zero) is less than or equal to a second distance threshold. When the detection result of the sensing panel shows that the distance between the operating body and the sensing panel gradually decreases and that the difference between those two positions is less than or equal to the second distance threshold, the second instruction generation unit can determine that the second generation condition is satisfied and generate the second control instruction. In response, the image processing unit generates the second image according to the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
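The two proximity conditions above reduce to two simple checks, sketched here under assumed threshold values (all constants and function names are illustrative; the patent only requires a decreasing distance and a position-difference bound):

```python
PREDETERMINED_DISTANCE = 30.0    # mm: range within which hover is detected
FIRST_DISTANCE_THRESHOLD = 10.0  # mm: fire when the approach crosses this
SECOND_DISTANCE_THRESHOLD = 5.0  # px: allowed gap between hover and touch points

def crossed_first_threshold(distances):
    """First variant: distance decreased monotonically to the first threshold."""
    decreasing = all(a > b for a, b in zip(distances, distances[1:]))
    return decreasing and distances[-1] <= FIRST_DISTANCE_THRESHOLD

def hover_matches_touch(hover_pos, touch_pos):
    """Second variant: touch-down point lies near the last hovered projection."""
    dx = hover_pos[0] - touch_pos[0]
    dy = hover_pos[1] - touch_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= SECOND_DISTANCE_THRESHOLD

print(crossed_first_threshold([25.0, 18.0, 9.0]))    # True: approach reached 9 mm
print(hover_matches_touch((200, 120), (203, 121)))   # True: about 3.2 px apart
```

The second check is what makes the hover preview trustworthy: the instruction fires only if the finger lands where the projected position said it would.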
According to another example of the present invention, the input unit 150 may include an image capture module and an image recognition module. Specifically, the image capture module can capture images of a spatial control region of the head-mounted electronic device 100 and obtain a capture result. According to the capture result obtained by the image capture module, the image recognition module determines a third position of the operating body in the spatial control region as the first position of the operating body relative to the head-mounted electronic device 100. When the head-mounted electronic device 100 is worn on the user's head, the user can watch, in a first direction, the first image displayed by the display unit 140, while the input unit 150 captures the spatial control region in a second direction, where the angle between the first direction and the second direction lies in a predetermined angular range. For example, when the user performs a control operation in the spatial control region with an operating body such as a finger, the second direction of the operating body relative to the user's head is not parallel to the first direction of the first image displayed by the display unit 140 relative to the user, so the user does not need to raise the operating body into the viewing area to operate the head-mounted electronic device; that is, the user may not see the operating body while operating in the spatial control region.
Optionally, the head-mounted electronic device 100 may further include a third instruction generation unit. The third instruction generation unit can determine, according to the capture result, whether the operating body in the spatial control region satisfies a third generation condition, and generate a third control instruction when the third generation condition is satisfied; the image processing unit generates the second image according to the third control instruction. For example, the third generation condition may be a second time threshold: when, according to the capture result obtained by the image capture module, the image recognition module determines that the operating body has been held at a particular position in the spatial control region for longer than the second time threshold, the third instruction generation unit can determine that the third generation condition is satisfied and generate the third control instruction. In response to the third control instruction, the image processing unit 120 generates the second image according to the current position of the operating body in the spatial control region.
In addition, according to a further example of the present invention, the input unit 150 may include multiple input regions. The input unit 150 can detect the operating body to determine, among the multiple input regions, a target input region corresponding to the operating body, and then detect the operation of the operating body within the target input region, so as to obtain the first position according to that operation.
For example, input block 150 can be touch input unit, and including multiple tactic touch input areas Domain.When operating body contacts with touch input unit, input block 150 can be determined in multiple touch input areas operating body with Target touch input area when touch input unit contacts belonging to its touch location, and it is defeated in target touch to operating body The operation entered in region is further detected, to be for example maintained at a spy in target touch input area according to operating body Position the operation acquisition first that the long-time put is touched, is moved to the left, moving right, moving forward, being moved rearwards etc Put.
Fig. 2 is an explanatory diagram showing an illustrative case of input through the input unit according to another example of the present invention. In the example shown in Fig. 2, the input unit 150 is a touch input unit including three sequentially arranged touch input regions 210, 220 and 230. As shown in Fig. 2, the touch input regions 210, 220 and 230 correspond respectively to character regions 240, 250 and 260, which contain different characters. Each character region includes a first character located at the center of the region and multiple second characters arranged around the first character. As shown in Fig. 2, the first character of character region 240 is S, the first character of character region 250 is G, and the first character of character region 260 is K. When the finger of the user, acting as the operating body, contacts touch input region 220 and moves to the right within it along the direction shown by the arrow, the input unit 150 determines among the touch input regions 210, 220 and 230 that region 220 is the target touch input region, and obtains the rightward movement of the operating body along the arrow to a first position a within region 220. According to the target input region 220, the image processing unit can determine among the character regions 240, 250 and 260 that character region 250, corresponding to region 220, is the target character region; determine, according to the motion track of the operating body and the first position a, that the second character H on the right side of the first character G is the first target character; and generate a second image relating to the first target character H. According to this example, because the number of input regions is small and structures such as raised bumps can be provided on the input regions, the user can distinguish the touch regions even without seeing them, and input quickly.
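The Fig. 2 selection scheme can be sketched as a lookup from (target region, gesture) to a character. The centers S, G, K come from the text; all the surrounding characters and the up/down assignments are assumed for illustration, since the figure itself is not reproduced here.

```python
# Each touch input region maps to a character zone: one central character plus
# characters reached by swiping in a direction. Layouts beyond S/G/K are assumed.
CHARACTER_REGIONS = {
    210: {"center": "S", "left": "A", "right": "D", "up": "W", "down": "X"},
    220: {"center": "G", "left": "F", "right": "H", "up": "T", "down": "B"},
    230: {"center": "K", "left": "J", "right": "L", "up": "I", "down": "M"},
}

def select_character(target_region, gesture):
    """Map the target touch region and the gesture to the first target character.

    gesture is 'tap' for the central character, or a swipe direction.
    """
    zone = CHARACTER_REGIONS[target_region]
    return zone["center"] if gesture == "tap" else zone[gesture]

# A rightward swipe in region 220 selects H, to the right of center G,
# matching the example in the text.
print(select_character(220, "right"))  # H
print(select_character(210, "tap"))    # S
```

With three regions and five gestures each, fifteen characters are reachable without the user ever looking at the panel, which is the point of keeping the region count small.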
It should be noted that in the example shown in Fig. 2, the input unit 150 is described as a touch input unit, but the invention is not limited to this; as described above, the input unit 150 may also be a proximity sensing unit that does not require contact with the operating body, an image capture unit, or the like. Moreover, the number of input regions is not limited to three: the input unit 150 may include two touch regions, or four or more input regions. Input regions may also be arranged separately on the parts of the fixing unit or the display unit that are close to the left and right sides of the user's head when the head-mounted electronic device 100 is worn.
With the head-mounted electronic device according to embodiments of the present invention, a user wearing the device can achieve complex input using the device itself, without an external input device such as a mouse or a keyboard, thereby freeing the user from the constraints of external input devices and making the device easy to carry and to use while moving.
In addition, according to an example of the present invention, the image processing unit 120 of the head-mounted electronic device 100 may also, before generating the second image, obtain a mapping position of the first position relative to the first image, and, when the mapping position is determined to lie within the first image, generate in the first image a mapping identifier for indicating the mapping position, so as to help the user determine the position of the operating body by viewing the first image containing the mapping identifier and avoid erroneous input.
For example, the first image may include a keyboard region containing multiple third characters. The image processing unit 120 can determine the mapping position, relative to the first image, of the first position obtained by the input unit, and, when the mapping position is determined to lie within the first image, generate in the first image a mapping identifier for indicating the mapping position. Through the displayed mapping identifier, the user can know which third character in the keyboard region the current first position corresponds to, and can therefore adjust the first position of the operating body relative to the head-mounted electronic device according to the displayed mapping identifier in order to select the desired character in the keyboard region.
Specifically, after the mapping identifier for indicating the mapping position has been generated in the first image, the input unit 150 may further receive a character input operation from the operating body (such as the input operations described above for generating the first, second and third instructions), and the image processing unit can determine, among the multiple third characters and according to the character input operation, the second target character corresponding to the mapping identifier, and generate a second image relating to the second target character.
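Mapping the first position onto a character in the keyboard region can be sketched as a coordinate scaling. The panel dimensions and the QWERTY row layout are illustrative assumptions; the patent specifies only that the mapping position selects a third character in the keyboard region.

```python
PANEL_W, PANEL_H = 300, 100           # sensing panel size (arbitrary units)
KEYBOARD_ROWS = ["QWERTYUIOP",        # third characters in the keyboard region
                 "ASDFGHJKL",
                 "ZXCVBNM"]

def map_to_keyboard(first_position):
    """Return (row, col, character) under the mapping position, or None
    if the position falls outside the keyboard region."""
    x, y = first_position
    row = int(y / PANEL_H * len(KEYBOARD_ROWS))
    if not 0 <= row < len(KEYBOARD_ROWS):
        return None
    cols = len(KEYBOARD_ROWS[row])
    col = int(x / PANEL_W * cols)
    if not 0 <= col < cols:
        return None
    return row, col, KEYBOARD_ROWS[row][col]

# A touch near the panel's center maps to a key in the middle row; the device
# would mark this cell with the mapping identifier before input is confirmed.
print(map_to_keyboard((150, 50)))   # (1, 4, 'G')
```

Displaying the identifier before the confirming input operation is what lets the user correct the operating body's position instead of committing a wrong character.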
According to another example of the present invention, the image transmission unit 130 in Fig. 1 may include a data transmission line arranged in the fixing unit 110. The first image can be sent to the display unit 140 through the data transmission line. Fig. 3 is a block diagram showing a display unit according to an example of the present invention. As shown in Fig. 3, a display unit 300 may include a first display module 310, a first optical system 320, a first light guide member 330, a second light guide member 340, a frame member 350 and a lens component 360. Fig. 4 is an explanatory diagram showing an illustrative case of the display unit 300 shown in Fig. 3.
The first display module 310 may be arranged in the frame member 350 and connected with a first data transmission line. The first display module 310 can display the first image according to a first video signal transmitted by the first data transmission line. According to an example of the present invention, the first display module 310 may be a display module with a small-sized micro display screen.
The first optical system 320 may also be arranged in the frame member 350. The first optical system 320 can receive the light emitted from the first display module and convert the optical path of that light to form a first magnified virtual image; that is, the first optical system 320 has positive refractive power. The user can thus view the first image clearly, and the size of the image seen by the user is not limited by the size of the display unit. For example, the optical system may include a convex lens. Alternatively, in order to reduce aberration, avoid interference with imaging caused by dispersion and the like, and bring a better visual experience to the user, the optical system may also be a lens assembly formed of multiple lenses including convex and concave lenses.
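Why a positive-refractive-power system magnifies the micro display can be seen from the standard thin-lens relation; this is textbook optics added for illustration, not a formula from the patent.

```latex
% Thin-lens relation: object distance d_o, image distance d_i, focal length f.
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
\quad\Longrightarrow\quad
d_i = \frac{f\, d_o}{d_o - f}.
% Placing the micro display inside the focal length (0 < d_o < f) gives
% d_i < 0: the image is virtual, with lateral magnification
m = -\frac{d_i}{d_o} = \frac{f}{f - d_o} > 1,
% so the user sees an enlarged virtual image of the small display screen.
```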
As shown in Fig. 4, after the first optical system 320 receives the light emitted from the first display module 310 and converts its optical path, the first light guide member 330 can transmit the light that has passed through the first optical system to the second light guide member 340. The second light guide member 340 may be arranged in the lens component 360; it can receive the light transmitted by the first light guide member 330 and reflect that light toward the eyes of the user wearing the head-mounted electronic device.
Fig. 5 is an explanatory diagram showing an illustrative case of the head-mounted electronic device shown in Fig. 1. The head-mounted electronic device 500 is a glasses-type electronic device. The head-mounted electronic device 500 includes a fixing unit, an image processing unit, an image transmission unit and an input unit similar to the fixing unit 110, image processing unit 120, image transmission unit 130 and input unit 150 of the head-mounted electronic device 100, as well as a display unit similar to the display unit described for the electronic device 100 in connection with Fig. 3, so repeated description is omitted here.
As shown in Fig. 5, the fixing unit of the head-mounted electronic device 500 includes a first support arm 510, a second support arm 520 and a third holding part (not shown). Specifically, the first support arm 510 includes a first connecting part and a first holding part (shown as the shaded part of the first support arm 510), where the first connecting part is configured to connect the frame member and the first holding part. The second support arm 520 includes a second connecting part and a second holding part (shown as the shaded part of the second support arm 520), where the second connecting part is configured to connect the frame member and the second holding part. Similarly to the description above in connection with Fig. 3, the display unit of the head-mounted electronic device 500 includes a frame member 530 and a lens component 540 connected with the frame member 530. The third holding part of the fixing unit is arranged on the frame member 530; specifically, it may be arranged at a position on the frame member 530 between the two lens components. The head-mounted electronic device is held on the user's head by the first, second and third holding parts. Specifically, the first and second holding parts can rest the first support arm 510 and the second support arm 520 on the user's ears, and the third holding part can rest the frame member 530 on the bridge of the user's nose.
The input unit of the head-mounted electronic device 500 may be arranged on the first support arm 510, the second support arm 520 and/or the frame member 530. For example, when the input unit includes a sensing panel, the input unit may be arranged on the first support arm 510 and/or the second support arm 520, so that the user can conveniently perform input operations such as touch or proximity. When the input unit includes an image capture module, the image capture module may be arranged on the frame member 530 so as to capture the operating body in the spatial control region.
In addition, as shown in Fig. 5, according to an example of the present invention, the frame member 530 may include a first end part 531 connected with the first support arm and a second end part 532 connected with the second support arm (the circled parts of the frame member in Fig. 5). The first display module and the first optical system included in the display unit may be arranged in the first end part 531 and/or the second end part 532.
In addition, according to another example of the present invention, the wear-type electronic equipment shown in Fig. 5 may also include an audio processing unit and a bone conduction unit. The audio processing unit performs audio processing and outputs a first audio signal. For example, an audio file may be stored in advance in the wear-type electronic equipment 500; the audio processing unit obtains the stored audio file and performs a play operation on it to output the first audio signal. As another example, the wear-type electronic equipment 500 may also include a transmitting/receiving unit to receive an audio file sent from another electronic device; the audio processing unit obtains the received audio file and performs a play operation on it to output the first audio signal. Preferably, the audio processing unit is arranged in the first holding portion of the first support arm 510 and/or the second holding portion of the second support arm 520. Moreover, the bone conduction unit may preferably be arranged on the inner side of the first connecting portion of the first support arm 510 and/or the inner side of the second connecting portion of the second support arm 520, and produces vibration according to the first audio signal. In this example, the inner sides of the first and second connecting portions are the sides close to the user's head when the wear-type electronic equipment is worn on the head of the user. The bone conduction unit produces vibration according to the audio signal from the audio processing unit, so that the user can listen to audio through the produced vibration. Specifically, when the wear-type electronic equipment 500 is worn on the head of the user, the bone conduction unit is in contact with the user's head, so that the user can perceive the vibration produced by the bone conduction unit.
According to an example of the present invention, the bone conduction unit can directly receive the audio signal from the audio processing unit and produce vibration according to it. Alternatively, according to another example of the present invention, the wear-type electronic equipment 500 may also include a power amplification unit arranged in the fixing unit. The power amplification unit receives the audio signal from the audio processing unit and amplifies it, the amplified audio signal being an AC voltage signal. The power amplification unit applies the amplified audio signal to the bone conduction unit, which is driven by it to produce vibration.
By providing a bone conduction unit in the wear-type electronic equipment, the user can listen to audio through the bone conduction unit arranged on the inner side of the equipment, improving audio output quality. Furthermore, since a traditional audio playing unit such as a loudspeaker or an earphone need not be provided in the wear-type electronic equipment, the space occupied by the equipment is reduced while the content the user listens to is prevented from being overheard by others.
Moreover, preferably, in the first and/or second support arm, the following layers may be arranged in order from the inner side of the support arm (the side close to the user's head when the wear-type electronic equipment is worn) to the outer side of the support arm (the side away from the user's head): a protective layer of the bone conduction unit in contact with the user's head, the body of the bone conduction unit, a data transmission unit layer (which may include, for example, data lines), and an input unit layer. In addition, in the case where the input unit includes the above-described sensing panel, the input unit layer may include a sensing panel layer and a protective layer of the sensing panel; furthermore, a spacing layer may be arranged between the data transmission unit layer and the sensing panel layer, to prevent electric signals in the data transmission unit layer from interfering with the sensing panel layer. Through this structure, the positions of the bone conduction unit and the input unit on the wear-type electronic equipment are reasonably provided, optimizing the product design of the equipment while improving audio output quality, and making the equipment convenient for the user to use and operate.

Fig. 6 is an explanatory diagram showing another illustrative case of the electronic equipment shown in Fig. 1. In the example shown in Fig. 6, the wear-type electronic equipment 600 includes a fixing unit, an image processing unit, an image transmission unit and an input unit similar to the fixing unit 110, the image processing unit 120, the image transmission unit 130 and the input unit 150 of the wear-type electronic equipment 100, and a display unit similar to the display unit 140 of the electronic equipment 100 described in connection with Fig. 3; these are therefore not described again here.
As shown in Fig. 6, the fixing unit of the wear-type electronic equipment 600 includes a headband portion 610 and connecting members 620 and 630. Similarly to the display unit described above in connection with Fig. 3, the display unit of the wear-type electronic equipment 600 includes frame members 640 and 650 and lens components 660 and 670 connected with the frame members 640 and 650, respectively. The connecting members 620 and 630 are connected with the frame members 640 and 650, respectively. When the headband portion 610 is worn on the head of the user, the headband portion 610 can deform elastically so that the first end and the second end of the headband portion press against the user's left and right ears, respectively. Preferably, as shown in Fig. 6, a first housing 611 and a second housing 612 may be arranged at the two ends of the headband portion 610. Optionally, loudspeaker units may be arranged in the first housing 611 and the second housing 612, so that the user can play audio using the wear-type electronic equipment 600.
The input unit of the wear-type electronic equipment 600 may be arranged on the headband portion 610, the connecting members 620 and 630 and/or the frame members 640 and 650. For example, when the input unit includes a sensing panel, the input unit may be arranged on the connecting members 620 and 630, so that the user can conveniently perform input operations such as touching or approaching. When the input unit includes an image capture module, the image capture module may be arranged on the headband portion 610 and/or the frame members 640 and 650, so as to capture images of an operating body in the spatial control region.
Below, an input method of an embodiment of the present invention is described with reference to Fig. 7. Fig. 7 depicts a flow chart of an input method 700 according to an embodiment of the present invention. The input method 700 can be applied to the wear-type electronic equipment shown in Figs. 1 to 6. The wear-type electronic equipment according to the embodiments of the present invention has been described in detail above in connection with Figs. 1 to 6, and for brevity is not described again here.
As shown in Fig. 7, in step S701, a first image is displayed. For example, an image file may be stored in advance in the wear-type electronic equipment; in step S701, the stored image file can be obtained and a play operation performed on the file to display the first image. As another example, the wear-type electronic equipment may also include a transmitting/receiving unit to receive an image file sent from another electronic device; in step S701, the received image file can be obtained and a play operation performed on the file to display the first image.
In step S702, a first position of an operating body relative to the wear-type electronic equipment is detected. Then, in step S703, a second image is generated according to the first position detected in step S702, and a third image is generated according to the second image and the first image. Finally, in step S704, the generated third image is displayed.
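A minimal sketch of how steps S701-S704 could fit together in code; the `Display`, `render_mark`, `compose` and `detect_position` names are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Display:
    """Stand-in for the display unit: records every image it is asked to show."""
    shown: List[object] = field(default_factory=list)

    def show(self, image):
        self.shown.append(image)

def render_mark(first_pos):
    # Second image: here simply a record of a mark at the detected position.
    return {"mark": first_pos}

def compose(first_image, second_image):
    # Third image: the first image combined with the second image.
    return {"base": first_image, "overlay": second_image}

def input_method(display, first_image, detect_position):
    """Steps S701-S704 of input method 700, as an illustrative sketch."""
    display.show(first_image)              # S701: display the first image
    pos = detect_position()                # S702: first position of the operating body
    second = render_mark(pos)              # S703: second image from the first position
    third = compose(first_image, second)   # S703: third image from first and second
    display.show(third)                    # S704: display the third image
    return third
```

Here the second image is just a mark overlay; in the examples below it can also concern a character determined from the mapping position.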
Optionally, before step S703, the method shown in Fig. 7 may also include obtaining a mapping position of the first position relative to the first image, determining whether the mapping position is located in the first image, and, when it is determined that the mapping position is located in the first image, generating in the first image a mapping mark for indicating the mapping position. This helps the user determine the position of the operating body by viewing the first image including the mapping mark, avoiding erroneous input.
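The mapping step can be sketched as a coordinate transform plus a bounds check; the linear mapping and all parameter names below are assumptions for illustration, since the patent only requires that the first position be mapped into the first image and marked when it lands inside it:

```python
def map_to_image(first_pos, panel_size, image_rect):
    """Map a first position on the input surface into first-image coordinates.

    first_pos: (x, y) on the panel; panel_size: (width, height) of the panel;
    image_rect: (x, y, width, height) of the first image. A simple linear
    mapping is assumed here.
    """
    px, py = first_pos
    pw, ph = panel_size
    ix, iy, iw, ih = image_rect
    return (ix + px / pw * iw, iy + py / ph * ih)

def mapping_mark_position(first_pos, panel_size, image_rect):
    """Return where to draw the mapping mark, or None when the mapping
    position falls outside the first image (no mark is generated)."""
    mx, my = map_to_image(first_pos, panel_size, image_rect)
    ix, iy, iw, ih = image_rect
    if ix <= mx <= ix + iw and iy <= my <= iy + ih:
        return (mx, my)
    return None
```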
In addition, according to an example of the present invention, the first image may include a plurality of third characters. After the mapping mark for indicating the mapping position is generated in the first image, the input method 700 shown in Fig. 7 may also include receiving a character input operation from the operating body. In step S703, according to the character input operation, a second target character corresponding to the mapping mark is determined among the plurality of third characters, and then a second image concerning the second target character is generated.
Figs. 8a to 8c are explanatory diagrams showing an illustrative case of generating a mapping mark for indicating a mapping position and generating a second image of the second target character corresponding to the mapping mark according to the present invention. In the example shown in Figs. 8a to 8c, the wear-type electronic equipment includes a sensing panel 820. According to step S701, a first image is displayed. As shown in Fig. 8a, the first image 810 may include a keyboard area including a plurality of third characters.
As shown in Fig. 8b, according to step S702, the operating body is detected, and when the operating body contacts the sensing panel 820, the position of the operating body on the sensing panel is obtained as the first position of the operating body relative to the wear-type electronic equipment. Then, as described above, the mapping position of the first position relative to the first image is obtained, and a mapping mark for indicating the mapping position is generated in the first image (the finger mark 830 shown in Fig. 8c).
In the example shown in Figs. 8a to 8c, when the character input operation from the operating body is received, as described above, in step S703, according to the character input operation, the second target character corresponding to the mapping mark is determined among the plurality of third characters, and then a second image concerning the second target character is generated. Specifically, when the contact of the operating body with the sensing panel 820 is detected, in step S703 it may be determined whether the contact of the operating body with the sensing panel 820 satisfies a first generation condition. For example, the first generation condition can be a first time threshold. When the detection result of the sensing panel 820 for the operating body indicates that the contact time of the operating body at a specific position on the sensing panel exceeds the first time threshold, it can be determined in step S703 that the first generation condition is satisfied. It can then be determined that the current input operation of the operating body is a character input operation, a first control instruction is generated in response to the character input operation, and, according to the first control instruction, the second image is generated based on the current position of the operating body on the sensing panel.
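The first generation condition, contact held at one spot longer than a time threshold, could be checked over a stream of touch samples roughly as follows; the sample format and the drift tolerance are assumptions:

```python
TOUCH_SLOP = 10.0  # assumed maximum drift (panel-coordinate units) for two
                   # contact reports to count as "the same position"

def character_input_detected(samples, time_threshold):
    """samples: time-ordered [(t, x, y), ...] contact reports from the panel.
    Returns the held (x, y) once the dwell exceeds time_threshold, else None."""
    if not samples:
        return None
    t0, x0, y0 = samples[0]
    for t, x, y in samples[1:]:
        if abs(x - x0) > TOUCH_SLOP or abs(y - y0) > TOUCH_SLOP:
            t0, x0, y0 = t, x, y        # moved to a new spot: restart the timer
        elif t - t0 > time_threshold:
            return (x0, y0)             # first generation condition satisfied
    return None
```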
Figs. 9a to 9e are explanatory diagrams showing another illustrative case of generating a mapping mark for indicating a mapping position and generating a second image of the second target character corresponding to the mapping mark according to the present invention. In the example shown in Figs. 9a to 9e, the wear-type electronic equipment includes a sensing panel 920. According to step S701, a first image is displayed. As shown in Fig. 9a, the first image 910 may include a keyboard area including a plurality of third characters.
Then, according to step S702, the operating body is detected, and the position b at which the operating body is projected on the sensing panel when the distance between the operating body and the sensing panel 920 is less than a preset distance (as shown in Fig. 9b), and/or the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero, is obtained as the first position of the operating body relative to the wear-type electronic equipment. Then, as described above, the mapping position of the first position relative to the first image is obtained, and a mapping mark for indicating the mapping position is generated in the first image (the finger mark 930 shown in Fig. 9c).
In the example shown in Figs. 9a to 9e, when it is detected that the distance between the operating body and the sensing panel 920 is less than or equal to the preset distance H1, as described above, in step S703, according to the character input operation, the second target character corresponding to the mapping mark is determined among the plurality of third characters, and then a second image concerning the second target character is generated. For example, in the example shown in Fig. 9d, when it is detected that the distance between the operating body and the sensing panel 920 is less than or equal to the preset distance H1, in step S703 it may be determined whether the operating body whose distance from the sensing panel is less than or equal to the preset distance satisfies a second generation condition. The second generation condition can be a first distance threshold H2 between the operating body and the sensing panel, where the first distance threshold may be less than or equal to the preset distance. When the detection result of the sensing panel 920 for the operating body indicates that the distance between the operating body and the sensing panel gradually decreases to the first distance threshold H2, it may be determined in step S703 that the second generation condition is satisfied, a second control instruction is generated, and, in response to the second control instruction, the second image is generated according to the position at which the operating body is currently projected on the sensing panel. As another example, the second generation condition can be that the distance between the operating body and the sensing panel gradually decreases, and that the difference between the position at which the operating body is projected on the sensing panel when the distance is less than or equal to the preset distance and the position of the operating body on the sensing panel when the distance is zero is less than or equal to a second distance threshold. As shown in Fig. 9e, when the detection result of the sensing panel for the operating body indicates that the distance between the operating body and the sensing panel gradually decreases, and the difference between the projected position of the operating body when the distance is less than or equal to the preset distance and the position of the operating body on the sensing panel when the distance is zero is less than or equal to the second distance threshold, it may be determined in step S703 that the second generation condition is satisfied, a second control instruction is generated, and, in response to the second control instruction, the second image is generated according to the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
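The first variant of the second generation condition (the distance decreasing below the preset distance H1 down to the first distance threshold H2) can be sketched over a time-ordered trace of distance samples; the trace format is an assumption, and the second variant, which compares the projected position with the eventual touch position, is omitted for brevity:

```python
def second_condition_met(trace, h1, h2):
    """trace: time-ordered [(distance, (x, y)), ...] samples, where (x, y)
    is the operating body's projected position on the sensing panel.
    Returns that projected position once the distance has decreased to the
    first distance threshold h2 (with h2 <= h1), else None."""
    prev = float("inf")
    for d, pos in trace:
        decreasing = d <= prev          # approach not interrupted at this step
        prev = d
        if decreasing and d <= h2 <= h1:
            return pos                  # generate the second image at this projection
    return None
```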
Figs. 10a to 10c are explanatory diagrams showing a further illustrative case of generating a mapping mark for indicating a mapping position and generating a second image of the second target character corresponding to the mapping mark according to the present invention. In the example shown in Figs. 10a to 10c, the wear-type electronic equipment includes an image capture module. According to step S701, a first image is displayed. As shown in Fig. 10a, the first image 1010 may include a keyboard area including a plurality of third characters.
As shown in Fig. 10b, according to step S702, image acquisition is performed on the spatial control region 1020 of the wear-type electronic equipment by the image capture module, and an acquisition result is obtained; according to the acquisition result, a third position of the operating body in the spatial control region 1020 is determined as the first position of the operating body relative to the wear-type electronic equipment. When the wear-type electronic equipment is worn on the head of the user, the user can watch the displayed first image along a first direction, and the image capture module acquires images of the spatial control region along a second direction, where the angle between the first direction and the second direction is within a predetermined angle range. Then, as described above, the mapping position of the first position relative to the first image is obtained, and a mapping mark for indicating the mapping position is generated in the first image (the finger mark 1030 shown in Fig. 10c).
In the example shown in Figs. 10a to 10c, when the character input operation from the operating body is received, as described above, in step S703, according to the character input operation, the second target character corresponding to the mapping mark is determined among the plurality of third characters, and then a second image concerning the second target character is generated. Specifically, when it is determined according to the acquisition result that the operating body is located in the spatial control region 1020, in step S703 it may be determined whether the operating body in the spatial control region 1020 satisfies a third generation condition. For example, the third generation condition can be a second time threshold. When the acquisition result for the operating body indicates that the time the operating body stays at a specific position in the spatial control region 1020 exceeds the second time threshold, it can be determined in step S703 that the third generation condition is satisfied. It can then be determined that the current input operation of the operating body is a character input operation, a third control instruction is generated in response to the character input operation, and, according to the third control instruction, the second image is generated based on the current position of the operating body in the spatial control region.
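The third generation condition, the operating body held at one spot in the spatial control region longer than the second time threshold, is the spatial analogue of the dwell check on the panel; the region bounds, sample format and tolerance below are assumptions:

```python
REGION = ((0, 100), (0, 100), (0, 50))  # assumed (min, max) bounds per axis
                                        # of the spatial control region

def in_region(p, region=REGION):
    return all(lo <= c <= hi for c, (lo, hi) in zip(p, region))

def third_condition_met(samples, time_threshold, slop=5.0):
    """samples: time-ordered [(t, (x, y, z)), ...] positions of the operating
    body from the image capture module. Returns the held position once the
    body has stayed near one spot in the region longer than time_threshold."""
    start = None
    for t, p in samples:
        if not in_region(p):
            start = None                # left the region: no dwell in progress
        elif start is None or any(abs(a - b) > slop for a, b in zip(p, start[1])):
            start = (t, p)              # entered the region or moved: restart dwell
        elif t - start[0] > time_threshold:
            return start[1]             # third generation condition satisfied
    return None
```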
In addition, according to another example of the present invention, the wear-type electronic equipment may include a plurality of input areas. In this case, in step S702, image acquisition can be performed on the spatial control region of the wear-type electronic equipment by the image capture module to obtain an acquisition result; according to the acquisition result, a third position of the operating body in the spatial control region is determined as the first position of the operating body relative to the wear-type electronic equipment.
Preferably, the plurality of input areas may correspond to a plurality of character zones, the characters included in the plurality of character zones being different from one another. Each character zone includes a first character located at the center of the character zone and a plurality of second characters arranged around the first character. In step S703, a target character zone can be determined among the plurality of character zones according to the target input area; a first target character is then determined in the target character zone according to the first position, and a second image concerning the first target character is generated. According to this example, since the number of input areas is small and structures such as protrusions can be provided in the input areas, the user can distinguish each touch area easily even without looking at it, and can input rapidly.
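One way such a zone layout could drive character selection: the target input area picks the character zone, and the offset of the first position from the zone's center picks either the central first character or, by angle, one of the surrounding second characters. The zone contents and the selection geometry are illustrative assumptions:

```python
import math

# Assumed contents: one first character per zone, second characters around it.
CHARACTER_ZONES = {
    0: {"first": "a", "second": ["b", "c", "d", "e"]},
    1: {"first": "f", "second": ["g", "h", "i", "j"]},
}

CENTER_RADIUS = 1.0  # assumed radius within which the center counts as hit

def pick_character(target_area, offset):
    """offset: (dx, dy) of the first position from the target zone's center."""
    zone = CHARACTER_ZONES[target_area]
    dx, dy = offset
    if math.hypot(dx, dy) < CENTER_RADIUS:
        return zone["first"]                     # first target character: the center
    angle = math.atan2(dy, dx) % (2 * math.pi)   # direction selects a ring slot
    idx = int(angle / (2 * math.pi) * len(zone["second"])) % len(zone["second"])
    return zone["second"][idx]
```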
According to the control method of the above examples of the present invention, when wearing the wear-type electronic equipment, the user does not need an external input device such as a mouse or a keyboard; complicated input can be achieved using the wear-type electronic equipment itself. The user is thus freed from the constraints of external input devices, and the equipment is easy to carry and use while moving.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be realized in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical scheme. Skilled artisans may realize the described functions using different methods for each specific application, but such realization should not be considered beyond the scope of the present invention.
It should be appreciated by those skilled in the art that various modifications, combinations, partial combinations and replacements can be made to the present invention depending on design requirements and other factors, as long as they are within the scope of the appended claims and their equivalents.

Claims (26)

1. Wear-type electronic equipment, comprising:
a fixing unit, by which the wear-type electronic equipment can be worn on the head of a user;
an image processing unit, arranged in the fixing unit and configured to obtain a first image to be displayed;
an image transmission unit, configured to send the first image to a display unit;
the display unit, configured to display the first image obtained by the image processing unit, wherein the display unit is connected with the fixing unit, and when the display unit is worn on the head of the user through the fixing unit, at least a first part of the display unit is located in the viewing area of the user and faces the user; and
an input unit, arranged on the fixing unit and/or the display unit and configured to detect a first position of an operating body relative to the wear-type electronic equipment;
wherein the image processing unit is further configured to generate a second image according to the first position, and to generate a third image according to the second image and the first image; and
the display unit is further configured to display the third image;
wherein
the input unit includes a plurality of input areas,
the input unit detects the operating body to determine, among the plurality of input areas, a target input area corresponding to the operating body, and detects an operation of the operating body in the target input area, so as to obtain the first position according to the operation of the operating body in the target input area;
wherein the input unit is a touch input unit including a plurality of sequentially arranged touch input areas, and when the operating body contacts the touch input unit, the touch input unit determines, among the plurality of touch input areas, the target touch input area to which the touch position of the operating body belongs when the operating body contacts the touch input unit, and further detects the operation of the operating body in the target touch input area, so as to obtain the first position according to the operation of the operating body in the target touch input area;
or
the input unit includes an image capture module and an image recognition module, the image capture module performs image acquisition on a spatial control region of the wear-type electronic equipment and obtains an acquisition result, and the image recognition module determines, according to the acquisition result obtained by the image capture module, a third position of the operating body in the spatial control region as the first position of the operating body relative to the wear-type electronic equipment.
2. The wear-type electronic equipment as claimed in claim 1, wherein
the input unit includes a sensing panel, and
the sensing panel detects the operating body to obtain a second position of the operating body on the sensing panel as the first position.
3. The wear-type electronic equipment as claimed in claim 2, wherein
the second position is the position of the operating body on the sensing panel when the operating body contacts the sensing panel.
4. The wear-type electronic equipment as claimed in claim 3, further comprising:
a first instruction generation unit, configured to determine, according to the detection result of the sensing panel for the operating body, whether the contact of the operating body with the sensing panel satisfies a first generation condition, and to generate a first control instruction when it is determined that the first generation condition is satisfied,
wherein the image processing unit generates the second image according to the first control instruction.
5. The wear-type electronic equipment as claimed in claim 2, wherein
the second position includes the position at which the operating body is projected on the sensing panel when the distance between the operating body and the sensing panel is less than or equal to a preset distance.
6. The wear-type electronic equipment as claimed in claim 5, wherein
the second position also includes the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
7. The wear-type electronic equipment as claimed in claim 5 or 6, further comprising:
a second instruction generation unit, configured to determine, according to the detection result of the sensing panel for the operating body, whether the operating body whose distance from the sensing panel is less than or equal to the preset distance satisfies a second generation condition, and to generate a second control instruction when it is determined that the second generation condition is satisfied,
wherein the image processing unit generates the second image according to the second control instruction.
8. The wear-type electronic equipment as claimed in claim 1,
wherein, when the wear-type electronic equipment is worn on the head of the user, the user can watch, along a first direction, the first image obtained by the image processing unit and displayed by the display unit, and the image capture module performs image acquisition on the spatial control region along a second direction, wherein the angle between the first direction and the second direction is within a predetermined angle range.
9. The wear-type electronic equipment as claimed in claim 8, further comprising:
a third instruction generation unit, configured to determine, according to the acquisition result, whether the operating body in the spatial control region satisfies a third generation condition, and to generate a third control instruction when it is determined that the third generation condition is satisfied,
wherein the image processing unit generates the second image according to the third control instruction.
10. The wear-type electronic equipment as claimed in claim 1, wherein
the plurality of input areas correspond to a plurality of character zones, wherein the characters included in the plurality of character zones are different from one another, and
each character zone includes:
a first character, located at the center of the character zone; and
a plurality of second characters, arranged around the first character,
wherein the image processing unit determines a target character zone among the plurality of character zones according to the target input area, determines a first target character in the target character zone according to the first position, and generates the second image concerning the first target character.
11. The wear-type electronic equipment as claimed in claim 2 or 8, wherein
the image processing unit is further configured to obtain a mapping position of the first position relative to the first image, and, when it is determined that the mapping position is located in the first image, to generate in the first image a mapping mark for indicating the mapping position.
12. The wear-type electronic equipment as claimed in claim 11, wherein
the first image includes a plurality of third characters,
the input unit is further configured to receive a character input operation from the operating body, and
the image processing unit determines, according to the character input operation, a second target character corresponding to the mapping mark among the plurality of third characters, and generates the second image concerning the second target character.
13. The wear-type electronic equipment as claimed in claim 1, wherein
the image transmission unit includes: a data line, arranged in the fixing unit and configured to send the first image to the display unit,
and the display unit comprises:
a frame member;
a lens component, connected with the frame member;
a first display module, arranged in the frame member and configured to display the first image according to a first video signal transmitted by the data line;
a first optical system, arranged in the frame member and configured to receive the light emitted from the first display module and to perform optical path conversion on the light emitted from the first display module, to form a first magnified virtual image;
a first light guide member, configured to transmit the light passing through the first optical system to a second light guide member; and
the second light guide member, arranged in the lens component and configured to reflect the light transmitted by the first light guide member to the eyes of the user wearing the wear-type electronic equipment.
14. The wear-type electronic equipment as claimed in claim 13, wherein the wear-type electronic equipment is glasses-type electronic equipment, and wherein the fixing unit includes:
a first support arm, comprising a first connecting portion and a first holding portion, wherein the first connecting portion is configured to connect the frame member and the first holding portion;
a second support arm, comprising a second connecting portion and a second holding portion, wherein the second connecting portion is configured to connect the frame member and the second holding portion; and
a third holding portion, arranged on the frame member,
wherein the first holding portion, the second holding portion and the third holding portion are configured to hold the wear-type electronic equipment on the head of the user.
15. An input method, applied to a head-mounted electronic device, the input method comprising:
displaying a first image;
detecting a first position of an operating body relative to the head-mounted electronic device;
generating a second image according to the first position, and generating a third image according to the second image and the first image; and
displaying the third image;
wherein the head-mounted electronic device comprises a plurality of input areas, and detecting the first position of the operating body relative to the head-mounted electronic device comprises:
detecting the operating body, to determine a target input area corresponding to the operating body among the plurality of input areas; and
detecting an operation of the operating body in the target input area, to obtain the first position according to the operation of the operating body in the target input area;
wherein the step of detecting the first position of the operating body relative to the head-mounted electronic device comprises: detecting, by an input unit, the first position of the operating body relative to the head-mounted electronic device;
wherein, when the input unit is a touch input unit comprising a plurality of sequentially arranged touch input areas, the steps of detecting the operating body to determine the target input area among the plurality of input areas, and of detecting the operation of the operating body in the target input area to obtain the first position, comprise:
when the operating body contacts the touch input unit, determining, by the touch input unit among the plurality of touch input areas, the target touch input area to which the touch location of the operating body belongs, and further detecting the operation of the operating body in the target touch input area, so as to obtain the first position according to the operation of the operating body in the target touch input area;
or
when the input unit comprises an image acquisition module and an image recognition module, the steps of detecting the operating body to determine the target input area among the plurality of input areas, and of detecting the operation of the operating body in the target input area to obtain the first position, comprise:
the image acquisition module performing image acquisition on a spatial control area of the head-mounted electronic device to obtain an acquisition result; and the image recognition module determining, according to the acquisition result obtained by the image acquisition module, a third position of the operating body in the spatial control area as the first position of the operating body relative to the head-mounted electronic device.
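As an illustrative sketch only (not the patent's implementation), the claimed flow can be read as: locate the operating body in one of several input areas, derive the first position from its operation there, then composite a cursor-like second image with the first image to form the third image. All names, types, and the rectangle-based area layout below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Position:
    """The 'first position' of the operating body, relative to its target input area."""
    x: int
    y: int

def detect_first_position(input_areas, operating_body_xy):
    """Determine the target input area containing the operating body,
    then the first position from its operation within that area."""
    x, y = operating_body_xy
    for area in input_areas:
        if area["x0"] <= x < area["x1"] and area["y0"] <= y < area["y1"]:
            return area["id"], Position(x - area["x0"], y - area["y0"])
    return None, None  # operating body outside every input area

def generate_third_image(first_image, first_position):
    """Generate a 'second image' (here, a cursor mark at the first position)
    and composite it with the first image into the 'third image'."""
    second_image = {"cursor": (first_position.x, first_position.y)}
    return {"base": first_image, "overlay": second_image}

# Two hypothetical side-by-side input areas:
areas = [{"id": "A", "x0": 0, "y0": 0, "x1": 50, "y1": 50},
         {"id": "B", "x0": 50, "y0": 0, "x1": 100, "y1": 50}]
area_id, pos = detect_first_position(areas, (60, 10))
```

With the operating body at (60, 10), the target input area is "B" and the first position within it is (10, 10).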
16. The input method as claimed in claim 15, further comprising, before generating the second image according to the first position:
obtaining a mapping position of the first position relative to the first image;
determining whether the mapping position is located in the first image; and
when it is determined that the mapping position is located in the first image, generating in the first image a mapping mark for indicating the mapping position.
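A minimal sketch of this check, assuming a simple linear scaling from panel coordinates to image coordinates (the patent does not specify the mapping; the function names and sizes below are hypothetical):

```python
def map_position(first_position, panel_size, image_size):
    """Scale a panel coordinate into the coordinate space of the first image."""
    px, py = first_position
    pw, ph = panel_size
    iw, ih = image_size
    return (px * iw // pw, py * ih // ph)

def mapping_mark(first_position, panel_size, image_size):
    """Return a mapping mark only when the mapped point lies inside the first image."""
    mx, my = map_position(first_position, panel_size, image_size)
    iw, ih = image_size
    if 0 <= mx < iw and 0 <= my < ih:
        return {"mark_at": (mx, my)}
    return None  # mapping position falls outside the first image: no mark

# A 100x100 panel mapped onto a 640x480 first image:
mark = mapping_mark((50, 50), (100, 100), (640, 480))
```

Here the panel centre (50, 50) maps to the image centre (320, 240), so a mark is generated; a point that maps outside the image bounds yields no mark.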
17. The input method as claimed in claim 16, wherein
the first image comprises a plurality of third characters, and
the input method further comprises:
receiving a character input operation from the operating body,
wherein generating the second image according to the first position comprises:
determining, according to the character input operation, a second target character corresponding to the mapping mark among the plurality of third characters; and
generating the second image related to the second target character.
18. The input method as claimed in claim 17, wherein the head-mounted electronic device comprises a sensing panel, and detecting the first position of the operating body relative to the head-mounted electronic device comprises:
detecting the operating body, to obtain a second position of the operating body on the sensing panel as the first position.
19. The input method as claimed in claim 18, wherein
the second position is the position of the operating body on the sensing panel when the operating body contacts the sensing panel.
20. The input method as claimed in claim 19, wherein
receiving the character input operation from the operating body comprises:
determining whether the contact of the operating body with the sensing panel satisfies a first generation condition; and
when it is determined that the first generation condition is satisfied, determining that the current input operation of the operating body is a character input operation,
and wherein determining, according to the character input operation, the second target character corresponding to the mapping mark among the plurality of third characters comprises:
when it is determined that the current input operation of the operating body is a character input operation, generating a first control instruction; and
generating the second image according to the first control instruction.
21. The input method as claimed in claim 18, wherein
the second position comprises the position at which the operating body is projected onto the sensing panel when the distance between the operating body and the sensing panel is less than or equal to a preset distance.
22. The input method as claimed in claim 21, wherein
the second position further comprises the position of the operating body on the sensing panel when the distance between the operating body and the sensing panel is zero.
23. The input method as claimed in claim 21 or 22, wherein
receiving the character input operation from the operating body comprises:
determining whether an operating body whose distance from the sensing panel is less than or equal to the preset distance satisfies a second generation condition; and
when it is determined that the second generation condition is satisfied, determining that the current input operation of the operating body is the character input operation,
and wherein determining, according to the character input operation, the second target character corresponding to the mapping mark among the plurality of third characters comprises:
when it is determined that the current input operation of the operating body is a character input operation, generating a second control instruction; and
generating the second image according to the second control instruction.
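The hover logic of claims 21–23 can be sketched as follows: the second position is the operating body's projection onto the sensing panel whenever its distance is within a preset threshold (contact being the zero-distance case), and a character input operation is recognised only when that proximity coincides with the generation condition. The threshold value and all names are assumptions for illustration, not values from the patent.

```python
PRESET_DISTANCE = 10.0  # illustrative threshold; the patent does not fix a value

def second_position(projected_xy, distance):
    """Return the projected panel position when the operating body is within
    the preset distance; at distance == 0 this is the contact position."""
    if distance <= PRESET_DISTANCE:
        return projected_xy
    return None  # too far from the sensing panel: no second position

def is_character_input(distance, meets_generation_condition):
    """Claim 23: a character input operation requires both proximity within
    the preset distance and satisfaction of the second generation condition."""
    return distance <= PRESET_DISTANCE and meets_generation_condition
```

For example, an operating body hovering 5 units above the panel yields a second position, while one 20 units away yields none.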
24. The input method as claimed in claim 17,
wherein, when the head-mounted electronic device is worn on the head of the user, the user can watch the displayed first image in a first direction, the image acquisition module acquires images of the spatial control area in a second direction, and the angle between the first direction and the second direction is within a predetermined angular range.
25. The input method as claimed in claim 24, wherein
receiving the character input operation from the operating body comprises:
determining whether the operating body in the spatial control area satisfies a third generation condition; and
when it is determined that the third generation condition is satisfied, determining that the current input operation of the operating body is a character input operation,
and wherein determining, according to the character input operation, the second target character corresponding to the mapping mark among the plurality of third characters comprises:
when it is determined that the current input operation of the operating body is a character input operation, generating a third control instruction; and
generating the second image according to the third control instruction.
26. The input method as claimed in claim 15, wherein
the plurality of input areas correspond to a plurality of character zones, the characters contained in the plurality of character zones being different from one another, and
each character zone comprises:
a first character, located at the center of the character zone; and
a plurality of second characters, arranged around the first character,
wherein generating the second image according to the first position comprises:
determining a target character zone among the plurality of character zones according to the target input area; and
determining a first target character in the target character zone according to the first position, and generating the second image related to the first target character.
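One plausible (purely illustrative) reading of this zone layout is a radial selector: the first character sits at the zone centre, the second characters occupy angular sectors around it, and the first target character is chosen from the first position's offset from the zone centre. The dead radius, sector ordering, and zone contents below are hypothetical.

```python
import math

def pick_character(zone, offset_xy, dead_radius=5.0):
    """Pick the first target character from a character zone.
    zone = {"center": <first character>, "around": [<second characters>, ...]},
    offset_xy = first position relative to the zone centre."""
    dx, dy = offset_xy
    if math.hypot(dx, dy) <= dead_radius:
        return zone["center"]  # close to the centre: the first character
    # Otherwise map the angle of the offset onto one of the surrounding sectors.
    angle = math.atan2(dy, dx) % (2 * math.pi)
    n = len(zone["around"])
    index = min(int(angle / (2 * math.pi / n)), n - 1)
    return zone["around"][index]  # one of the second characters

# A hypothetical zone: 'e' at the centre, four characters around it.
zone = {"center": "e", "around": ["a", "b", "c", "d"]}
```

With this layout, a small offset selects the central "e", while a large offset along the positive x-axis selects the first surrounding character "a".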
CN201210593624.XA 2012-12-31 2012-12-31 Wear-type electronic equipment and input method Active CN103914128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210593624.XA CN103914128B (en) 2012-12-31 2012-12-31 Wear-type electronic equipment and input method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210593624.XA CN103914128B (en) 2012-12-31 2012-12-31 Wear-type electronic equipment and input method

Publications (2)

Publication Number Publication Date
CN103914128A CN103914128A (en) 2014-07-09
CN103914128B true CN103914128B (en) 2017-12-29

Family

ID=51039881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210593624.XA Active CN103914128B (en) 2012-12-31 2012-12-31 Wear-type electronic equipment and input method

Country Status (1)

Country Link
CN (1) CN103914128B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6340301B2 (en) * 2014-10-22 2018-06-06 株式会社ソニー・インタラクティブエンタテインメント Head mounted display, portable information terminal, image processing apparatus, display control program, display control method, and display system
CN104391575A (en) * 2014-11-21 2015-03-04 深圳市哲理网络科技有限公司 Head mounted display device
KR20160063812A (en) * 2014-11-27 2016-06-07 삼성전자주식회사 Method for configuring screen, electronic apparatus and storage medium
CN106155284B (en) * 2015-04-02 2019-03-08 联想(北京)有限公司 Electronic equipment and information processing method
CN107466396A (en) * 2016-03-22 2017-12-12 深圳市柔宇科技有限公司 Head-mounted display apparatus and its control method
US11022802B2 (en) * 2018-09-28 2021-06-01 Apple Inc. Dynamic ambient lighting control
CN111445393B (en) * 2019-10-22 2020-11-20 合肥耀世同辉科技有限公司 Electronic device content driving platform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673161A (en) * 2009-10-15 2010-03-17 复旦大学 Visual, operable and non-solid touch screen system
CN101719014A (en) * 2008-10-09 2010-06-02 联想(北京)有限公司 Input display method
WO2011106798A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
CN102445988A (en) * 2010-10-01 2012-05-09 索尼公司 Input device
CN102779000A (en) * 2012-05-03 2012-11-14 乾行讯科(北京)科技有限公司 User interaction system and method
CN102812417A (en) * 2010-02-02 2012-12-05 寇平公司 Wireless hands-free computing headset with detachable accessories controllable by motion, body gesture and/or vocal commands

Also Published As

Publication number Publication date
CN103914128A (en) 2014-07-09

Similar Documents

Publication Publication Date Title
CN103914128B (en) Wear-type electronic equipment and input method
US10795445B2 (en) Methods, devices, and systems for determining contact on a user of a virtual reality and/or augmented reality device
US10342428B2 (en) Monitoring pulse transmissions using radar
US10635179B2 (en) Apparatus, systems, and methods for facilitating user interaction with electronic devices
KR102349716B1 (en) Method for sharing images and electronic device performing thereof
JP6340301B2 (en) Head mounted display, portable information terminal, image processing apparatus, display control program, display control method, and display system
US10564717B1 (en) Apparatus, systems, and methods for sensing biopotential signals
US11599193B1 (en) Finger pinch detection
WO2020205767A1 (en) Methods and apparatus for gesture detection and classification
CN104270623B (en) A kind of display methods and electronic equipment
US11630520B1 (en) Systems and methods for sensing gestures via vibration-sensitive wearables donned by users of artificial reality systems
CN111868666A (en) Method, device and system for determining contact of a user of a virtual reality and/or augmented reality device
KR101203921B1 (en) Information providing apparatus using an eye tracking and local based service
KR20220125362A (en) Position Tracking System for Head-Worn Display Systems Including Angle Sensing Detectors
CN202975477U (en) Head-mounted electronic equipment
US20090245572A1 (en) Control apparatus and method
CN106937143A (en) The control method for playing back and device and equipment of a kind of virtual reality video
US20210160150A1 (en) Information processing device, information processing method, and computer program
CN103713387A (en) Electronic device and acquisition method
CN103852890A (en) Head-mounted electronic device and audio processing method
US11579704B2 (en) Systems and methods for adaptive input thresholding
RU110845U1 (en) MOBILE DEVICE MANAGEMENT SYSTEM BY USER TURNING THE USER'S HEAD
US20220191296A1 (en) Devices, systems, and methods for modifying features of applications based on predicted intentions of users
WO2022113834A1 (en) System, imaging device, information processing device, information processing method, and information processing program
WO2022220048A1 (en) System, information processing method, and information processing program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant