CN103905763A - Processing method and electronic equipment


Info

Publication number
CN103905763A
Authority
CN
China
Prior art keywords
image
document
input
acquisition units
mode
Prior art date
Legal status
Granted
Application number
CN201210589682.5A
Other languages
Chinese (zh)
Other versions
CN103905763B (en)
Inventor
Zhang Liuxin (张柳新)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201210589682.5A
Publication of CN103905763A
Application granted
Publication of CN103905763B
Legal status: Active

Abstract

Provided are a processing method and electronic equipment. The electronic equipment comprises a display unit and an operation input part. The processing method comprises: receiving, through the operation input part, a first operation input directed at an object, the object comprising a plurality of sub-objects; judging whether the first operation input is a first operation or a second operation on the object; when the first operation input is the first operation, opening a particular sub-object in the object and displaying it on the display unit; and when the first operation input is the second operation, displaying the plurality of sub-objects in the object on the display unit. By applying different operation inputs to an object that comprises a plurality of sub-objects, either a particular sub-object or all of the sub-objects in the object can be displayed selectively, which makes it convenient to store and operate on objects that comprise a plurality of sub-objects.

Description

Processing method and electronic equipment
Technical field
The present invention relates to the field of image display, and more particularly to a processing method and electronic equipment for image display.
Background technology
With the growing popularity of portable digital photographing devices such as mobile phones and digital cameras, recording the details of one's daily life and sharing one's happiness with friends by taking pictures has become a habit for many people.
Although people usually carry these digital photographing devices with them, the devices are rarely held in hand at every moment; they are normally kept in a pocket or in a bag carried on the body. As a result, for events that happen in an instant, such as a beautiful flash of lightning, people often miss the chance to record them because there is not enough time to take the device out of the pocket or bag.
The appearance of wearable glasses devices such as Google Glass has improved this situation to some extent. Because such a wearable glasses device can be worn at all times, when an instantaneous event occurs, the user can trigger the device to take a picture directly, without first taking it out of a pocket or bag, thereby achieving a "see it, shoot it" quick-capture effect.
However, because a person's reaction always involves a certain delay, and there is also a certain shooting lag between triggering the camera function of the device and the device actually taking a photo, the truly splendid instant is often still missed by the time the user reacts and triggers the device.
Therefore, a method for improving the snapshot capability of a wearable glasses device is needed; such a method produces an image package that contains a plurality of images, i.e., an object that contains a plurality of sub-objects. Correspondingly, a method of processing an object that contains a plurality of sub-objects is also needed, which can display either all of the sub-objects in the object or a particular sub-object in the object according to different operation inputs.
Summary of the invention
The present invention has been made in view of the above problems. The present invention provides a processing method which, by applying different operation inputs to an object containing a plurality of sub-objects, can present the sub-objects in the object in different ways, namely either a particular sub-object or all of the sub-objects, thereby facilitating the storage and operation of objects that contain a plurality of sub-objects.
According to one aspect of the present invention, a processing method is provided for electronic equipment, the electronic equipment comprising a display unit and an operation input part, and the processing method comprising: receiving, by the operation input part, a first operation input directed at an object, the object comprising a plurality of sub-objects; judging whether the first operation input is a first operation on the object or a second operation on the object; when the first operation input is the first operation, opening a particular sub-object in the object and displaying it on the display unit; and when the first operation input is the second operation, displaying the plurality of sub-objects in the object on the display unit.
Preferably, in the processing method, the object is an image package that comprises a first number of images; the image package uses one of the first number of images as a top-layer image, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
Preferably, in the processing method, the object is a document package that comprises a first number of documents; the document package uses one of the first number of documents as a top-layer document, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
Preferably, in the processing method, the electronic equipment further comprises a first image acquisition unit, and the object is generated by the following steps: receiving a second operation input by the operation input part; in response to the second operation input, entering a first operating mode, and in the first operating mode, capturing and storing images with the first image acquisition unit at a first time interval; receiving a third operation input by the operation input part; in response to the third operation input, switching from the first operating mode to a second operating mode, and in the second operating mode, capturing and storing images with the first image acquisition unit at a second time interval; and packaging a second number of images last captured and stored in the first operating mode together with a third number of images captured and stored in the second operating mode to form the image package, wherein the first number equals the sum of the second number and the third number.
Preferably, in the processing method, the image package is an image-stream-format file in which the second number of images and the third number of images are encapsulated in sequence according to their shooting order.
Preferably, in the processing method, the electronic equipment is a pair of wearable glasses, the display unit satisfies a predetermined light transmittance, the operation input part comprises a second image acquisition unit, the first operation input is an input gesture of the user collected by the second image acquisition unit, and judging whether the first operation input is the first operation on the object or the second operation on the object comprises: determining that the input gesture is the first operation when the input gesture satisfies a first condition; and determining that the input gesture is the second operation when the input gesture satisfies a second condition, wherein the first condition is different from the second condition.
Preferably, in the processing method, the second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture, wherein the second image acquisition unit comprises two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the input gesture is recognized by the second image acquisition unit.
According to another aspect of the present invention, electronic equipment is provided, comprising: a storage unit for storing an object, the object comprising a plurality of sub-objects; an operation input part for receiving a first operation input directed at the object; a processing unit for judging whether the first operation input is a first operation on the object or a second operation on the object; and a display unit; wherein, when the processing unit judges that the first operation input is the first operation on the object, the processing unit opens a particular sub-object in the object and displays the opened particular sub-object on the display unit; and when the processing unit judges that the first operation input is the second operation on the object, the processing unit displays the plurality of sub-objects in the object on the display unit.
Preferably, in the electronic equipment, the object is an image package that comprises a first number of images; the processing unit uses one of the first number of images as a top-layer image of the image package, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
Preferably, in the electronic equipment, the object is a document package that comprises a first number of documents; the processing unit uses one of the first number of documents as a top-layer document of the document package, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
Preferably, the electronic equipment further comprises a first image acquisition unit, wherein when a second operation input is received from the operation input part, the processing unit enters a first operating mode and, in the first operating mode, captures and stores images with the first image acquisition unit at a first time interval; when a third operation input is received from the operation input part, the processing unit switches from the first operating mode to a second operating mode and, in the second operating mode, captures and stores images with the first image acquisition unit at a second time interval; and the processing unit packages a second number of images last captured and stored in the first operating mode together with a third number of images captured and stored in the second operating mode to form the image package, wherein the first number equals the sum of the second number and the third number.
Preferably, in the electronic equipment, the image package is an image-stream-format file in which the second number of images and the third number of images are encapsulated in sequence according to their shooting order.
Preferably, the electronic equipment is a pair of wearable glasses, the display unit satisfies a predetermined light transmittance, the operation input part comprises a second image acquisition unit for collecting an input gesture of the user, and the processing unit judging whether the first operation input is the first operation on the object or the second operation on the object comprises: the processing unit determining that the input gesture is the first operation when the input gesture satisfies a first condition; and the processing unit determining that the input gesture is the second operation when the input gesture satisfies a second condition, wherein the first condition is different from the second condition.
Preferably, in the electronic equipment, the second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture, wherein the second image acquisition unit comprises two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the input gesture is recognized by the second image acquisition unit.
With the processing method and electronic equipment according to the embodiments of the present invention, files are packaged and stored in a special file format, and a suitable file can be selected from the package, or a default top-layer document can be used, which improves the flexibility and convenience of file operations.
Brief description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart illustrating a processing method 100 according to an embodiment of the present invention;
Fig. 2 is a schematic diagram illustrating a process of generating an image package according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart illustrating a process of generating an image package according to an embodiment of the present invention;
Fig. 4 is a schematic diagram illustrating a process of processing an image package according to an embodiment of the present invention;
Fig. 5 is a schematic diagram illustrating editing of an image package according to an embodiment of the present invention; and
Fig. 6 is a schematic block diagram illustrating electronic equipment according to an embodiment of the present invention.
Detailed description of the embodiments
A processing method 100 according to an embodiment of the present invention is described below with reference to Fig. 1. The processing method is used for electronic equipment that comprises a display unit and an operation input part.
The processing method 100 according to the embodiment of the present invention starts at step S101.
At step S110, a first operation input directed at an object is received by the operation input part, the object comprising a plurality of sub-objects.
Then, at step S120, it is judged whether the first operation input is a first operation on the object or a second operation on the object.
If it is judged at step S120 that the first operation input is the first operation on the object, the processing method 100 proceeds to step S130. At step S130, a particular sub-object in the object is opened and displayed on the display unit.
Otherwise, if it is judged at step S120 that the first operation input is the second operation on the object, the processing method 100 proceeds to step S140. At step S140, the plurality of sub-objects in the object are displayed on the display unit.
Finally, the processing method 100 according to the embodiment of the present invention ends at step S199.
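As a purely illustrative sketch (not part of the original disclosure), the branching logic of steps S120 to S140 could be expressed in Python roughly as follows; the names PackagedObject, classify_operation, and handle_first_operation_input are hypothetical and the gesture strings are placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PackagedObject:
    """An object that bundles several sub-objects, e.g. an image package."""
    sub_objects: List[str]
    top_index: int = 0  # index of the particular sub-object (the "top layer")

def classify_operation(gesture: str) -> str:
    """Hypothetical classifier: map an input gesture to the first or second operation."""
    return "second" if gesture == "pinch_out" else "first"

def handle_first_operation_input(obj: PackagedObject, gesture: str) -> List[str]:
    """Return what the display unit should show (steps S120 to S140)."""
    if classify_operation(gesture) == "first":
        # Step S130: open and display only the particular sub-object.
        return [obj.sub_objects[obj.top_index]]
    # Step S140: display all sub-objects in the object.
    return list(obj.sub_objects)

# Example: a package of four frames whose particular sub-object is frame 2.
package = PackagedObject(["frame1", "frame2", "frame3", "frame4"], top_index=1)
print(handle_first_operation_input(package, "tap"))        # ['frame2']
print(handle_first_operation_input(package, "pinch_out"))  # all four frames
```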
As an example, the object is an image package that comprises a first number of images; the image package uses one of the first number of images as a top-layer image, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
As another example, the object is a document package that comprises a first number of documents; the document package uses one of the first number of documents as a top-layer document, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
As another example, the object is a video package that comprises a first number of videos; the video package uses one of the first number of videos as a default video, each of the first number of videos serves as a sub-object, and the default video serves as the particular sub-object.
The formation of the image package is described below with reference to Figs. 2-3, taking the case where the object is an image package as an example. In this case, the electronic equipment needs to comprise a first image acquisition unit, and the images included in the image package are captured by the first image acquisition unit.
First, after the user starts the photographing application of the electronic equipment, the first image acquisition unit of the electronic equipment starts. In particular, the pre-recording mechanism of the first image acquisition unit starts in the background, and the image stream collected by the first image acquisition unit is first put into a first first-in-first-out (FIFO) buffer queue made up of an array of storage cells.
Then, when the user triggers the photographing function of the first image acquisition unit of the electronic equipment, for example by pressing a shutter button, speaking a photographing voice command, blinking, nodding, or issuing any other pre-set command that can trigger the device to take a picture, the first image acquisition unit enters the photographing mechanism: the image at that moment is recorded first, the continuous-shooting mechanism of the first image acquisition unit then starts so that a number of frames captured after entering the photographing mechanism are also recorded, and these frames are put into a second first-in-first-out (FIFO) buffer queue. The second FIFO buffer queue may be the same FIFO buffer queue as the first FIFO buffer queue, or may be a different FIFO buffer queue.
Assuming that one frame collected by the first image acquisition unit occupies one storage cell, every time the first image acquisition unit collects a frame, the frame is put into one storage cell of the corresponding FIFO buffer queue while the earliest frame is erased from that buffer queue. When the buffered images are read out of the corresponding FIFO buffer queue, the earliest frame is likewise read out first, followed in turn by the frames put in after it. In other words, the whole write/read process follows the first-in-first-out principle: the frame that entered the buffer queue first is also read out of the buffer queue first. Within the whole FIFO buffer queue, the order of the frames is consistent with the order in which the first image acquisition unit collects frames of the external environment.
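As a minimal illustration of this buffering behaviour (again not part of the patent text), a fixed-length FIFO buffer that discards the oldest frame on overflow can be sketched in Python with collections.deque; the frame labels and capacity are placeholders.

```python
from collections import deque

class PreRecordBuffer:
    """Fixed-size FIFO queue of frames: the newest frame evicts the oldest one."""
    def __init__(self, capacity: int):
        self.frames = deque(maxlen=capacity)  # deque drops the oldest item automatically

    def push(self, frame):
        self.frames.append(frame)  # store the new frame; the earliest frame is erased if full

    def read_all(self):
        """Read buffered frames in first-in-first-out order (oldest first)."""
        return list(self.frames)

# Example: a 5-cell buffer continuously fed by the pre-recording mechanism.
buf = PreRecordBuffer(capacity=5)
for i in range(1, 9):           # frames 1..8 arrive
    buf.push(f"frame{i}")
print(buf.read_all())           # ['frame4', ..., 'frame8']; frames 1-3 were evicted
```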
In the case where the second FIFO buffer queue and the first FIFO buffer queue are the same FIFO buffer queue, the FIFO buffer queue can store at least the first number of images. No matter how many images are stored in the FIFO buffer queue when the photographing mechanism is started, the third number of images are additionally stored into the FIFO buffer queue after the photographing mechanism is started. In other words, the first number of images can consist of the second number of images collected under the pre-recording mechanism and the third number of images collected under the photographing mechanism, where the first number equals the sum of the second number and the third number.
In this case, finally, the first number of images in the FIFO buffer queue can be output in sequence as image stream data to a storage unit of the electronic equipment and stored as a special image-stream-format file, for example an image package, as shown in Fig. 2.
In the case where the second FIFO buffer queue and the first FIFO buffer queue are different FIFO buffer queues, the first FIFO buffer queue can store the second number of images collected under the pre-recording mechanism, and the second FIFO buffer queue can store the third number of images collected under the photographing mechanism.
In this case, finally, the second number of images in the first FIFO buffer queue and the third number of images in the second FIFO buffer queue can be output in sequence as image stream data to a storage unit of the electronic equipment and stored as a special image-stream-format file, for example an image package. The image-stream-format file then stores a first number of images in total, where the first number equals the sum of the second number and the third number.
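A rough sketch of packing the two buffers into a single image-stream file might look like the following; the on-disk layout shown here (a frame count and top-layer index followed by length-prefixed frames) and the function name pack_image_stream are assumptions made for illustration, not the actual format defined by the patent.

```python
import struct
from typing import List

def pack_image_stream(pre_record_frames: List[bytes],
                      burst_frames: List[bytes],
                      path: str) -> None:
    """Write the pre-recorded frames followed by the burst frames into one file.

    Assumed layout: [frame count][top-layer index][len + bytes of each frame ...].
    The top-layer index points at the first burst frame (the "photographed frame").
    """
    frames = pre_record_frames + burst_frames          # shooting order is preserved
    top_index = len(pre_record_frames)                 # the frame shot at trigger time
    with open(path, "wb") as f:
        f.write(struct.pack("<II", len(frames), top_index))
        for frame in frames:
            f.write(struct.pack("<I", len(frame)))
            f.write(frame)

# Example: second number = 3 pre-recorded frames, third number = 2 burst frames.
pack_image_stream([b"f1", b"f2", b"f3"], [b"f4", b"f5"], "package.imgstream")
```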
The first time interval may be the same as or different from the second time interval, and the image resolution of each image in the second number of images may be the same as or different from the image resolution of each image in the third number of images.
Fig. 3 shows a schematic flowchart of a process of generating an image package according to an embodiment of the present invention.
First, at step S310, a second operation input is received by the operation input part. The second operation input is used to trigger the first operating mode of the first image acquisition unit, for example the pre-recording mechanism, and the second operation input may be, for example, pressing a hardware button on the electronic equipment, or a pre-set operating gesture or voice command that can trigger the first operating mode.
At step S320, in response to the second operation input, the first operating mode is entered, and in the first operating mode images are captured and stored with the first image acquisition unit at a first time interval. As shown in Fig. 2, in response to the second operation input, the first image acquisition unit starts capturing and storing images at the first time interval from the time point corresponding to the 1st frame, and the 1st to (n-1)th frames stored in the FIFO buffer queue are obtained.
At step S330, a third operation input is received by the operation input part. The third operation input is used to trigger the second operating mode of the first image acquisition unit, for example the photographing mechanism, and the third operation input may be, for example, pressing a shutter button on the electronic equipment, or a pre-set operating gesture or voice command that can trigger the second operating mode.
At step S340, in response to the third operation input, the first operating mode is switched to the second operating mode, and in the second operating mode images are captured and stored with the first image acquisition unit at a second time interval. As shown in Fig. 2, at the time point corresponding to the nth frame the first operating mode is switched to the second operating mode, the first image acquisition unit captures the nth frame and then continues to capture and store images at the second time interval using the continuous-shooting mechanism, and the nth to (n+3)th frames stored in the FIFO buffer queue are obtained.
Finally, the second number of images last captured and stored in the first operating mode and the third number of images captured and stored in the second operating mode are packaged to form the image package. As shown in Fig. 2, the resulting image package comprises the 1st to (n+3)th frames, with the nth frame as the top-layer image.
In terms of data structure, the image package differs somewhat from image files of common formats such as BMP, JPG, and PNG. Specifically, in the image package, the image data used for display is the image data in one storage cell of the FIFO buffer queue (which can be regarded as the "top layer" of the image package), while the image data in the remaining storage cells of the FIFO buffer queue is also stored in the image package but serves as an auxiliary data structure (which can be regarded as the "other layers" of the image package) and is not used for display. The top-layer image the user sees is no different from an ordinary BMP, JPG, or PNG image file that contains only the content of the "photographed frame"; from the data-structure perspective, however, the image package additionally contains an auxiliary data structure that an ordinary image does not have, namely the image data in the remaining storage cells of the FIFO buffer queue.
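To make the "top layer plus hidden layers" idea concrete, the following sketch reads back the hypothetical layout written above and exposes only the top-layer frame for display; the format and the function name read_image_stream are, again, illustrative assumptions rather than the patent's actual file format.

```python
import struct
from typing import List, Tuple

def read_image_stream(path: str) -> Tuple[int, List[bytes]]:
    """Read every frame from the assumed image-stream layout.

    Returns (top_index, frames): only frames[top_index] is meant to be displayed;
    the remaining frames are the auxiliary "other layers" of the package.
    """
    with open(path, "rb") as f:
        count, top_index = struct.unpack("<II", f.read(8))
        frames = []
        for _ in range(count):
            (length,) = struct.unpack("<I", f.read(4))
            frames.append(f.read(length))
    return top_index, frames

# Example: an ordinary viewer would show only the top-layer frame.
top_index, frames = read_image_stream("package.imgstream")
display_frame = frames[top_index]
other_layers = frames[:top_index] + frames[top_index + 1:]  # hidden auxiliary frames
```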
For the case where the object is a video package, the video package can be formed in a similar way. For the case where the object is a document package, the document package can likewise be formed in a similar way according to the order of editing sessions.
Next, processes of processing and editing an image package according to embodiments of the present invention are described with reference to Fig. 4 and Fig. 5.
As described above in conjunction with Fig. 1, in the processing method according to the embodiment of the present invention, it is judged at step S120 whether the first operation input is the first operation on the object or the second operation on the object. The operation input part comprises a second image acquisition unit, and the first operation input is an input gesture of the user collected by the second image acquisition unit. The operation of step S120 may specifically comprise: determining that the input gesture is the first operation when the input gesture satisfies a first condition; and determining that the input gesture is the second operation when the input gesture satisfies a second condition, wherein the first condition is different from the second condition. The second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture.
For example, the first operating body may be a first finger and the second operating body may be a second finger. In this case the second condition may be a pinch-out action between the first finger and the second finger, as shown in the left part of Fig. 5: for example, an action from a closed state of the first finger and the second finger to an open state of the first finger and the second finger, or an action from an open state in which the distance between the first finger and the second finger is a first distance to an open state in which the distance between the first finger and the second finger is a second distance, the first distance being smaller than the second distance.
The second condition may also be an action along the z-axis direction (i.e., the depth direction): a two-finger pinch-out gesture performed relative to the current x-y plane (that is, the plane of the display screen of the electronic equipment, or the plane of the image acquisition unit). In other words, when the user performs the second operation with the first operating body and the second operating body, the plane formed by the first operating body and the second operating body (that is, the plane formed by the fingertip of the first finger and the fingertip of the second finger) makes an angle with the current x-y plane. That is, the second condition may be a two-finger pinch-out along the z direction (hereinafter referred to as z-pinch out). When an input gesture of a z-direction two-finger pinch-out (z-pinch out) is detected, the image-stream-format file is expanded in the z direction into its different image frames, as shown in the left part of Fig. 5.
In this case, preferably, the second image acquisition unit may comprise two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the z-axis input gesture is recognized by the two sub-image acquisition units in the second image acquisition unit, so as to judge whether the input gesture satisfies the second condition.
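A simplified sketch of classifying such a gesture from two fingertip positions is given below; the thresholds, the function names, and the assumption that the stereo sub-image acquisition units already yield 3-D fingertip coordinates are illustrative only and not specified by the patent.

```python
import math
from typing import Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) fingertip position from stereo cameras

def finger_distance(p1: Point3D, p2: Point3D) -> float:
    return math.dist(p1, p2)

def classify_gesture(start: Tuple[Point3D, Point3D],
                     end: Tuple[Point3D, Point3D],
                     min_spread: float = 0.03,
                     min_z_component: float = 0.5) -> str:
    """Return 'second' for a z-direction pinch-out, otherwise 'first'.

    Assumed second condition: the two operating bodies move apart, and the
    separation grows mostly along the z (depth) axis.
    """
    spread = finger_distance(*end) - finger_distance(*start)
    dz = abs((end[0][2] - end[1][2]) - (start[0][2] - start[1][2]))
    if spread > min_spread and dz / max(spread, 1e-6) > min_z_component:
        return "second"   # z-pinch out: expand the package into its frames
    return "first"        # anything else: open only the top-layer image

# Example: the fingertips start nearly together and end 8 cm apart, mostly in depth.
start = ((0.00, 0.0, 0.30), (0.01, 0.0, 0.30))
end = ((0.00, 0.0, 0.26), (0.01, 0.0, 0.34))
print(classify_gesture(start, end))  # 'second'
```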
Specifically, if it is judged at step S120 that the first operation input is the second operation on the object, then at step S140 the plurality of sub-objects in the object are displayed on the display unit.
For example, at step S140, the plurality of sub-objects in the object are expanded and displayed on the display unit in an overlapping manner. Specifically, for the image package, at step S140 the plurality of images in the image package are expanded and displayed on the display unit in an overlapping manner, as shown in the left part of Fig. 5: the top-layer image is displayed at the top layer of the image package, and the other-layer images in the image package are then displayed successively in lower layers in an overlapping manner.
In addition, when the top-layer image of the image package is being displayed on the display unit (i.e., in the normal state), the user can also trigger the image package to switch from the normal state to an editing state by a physical button or by an input gesture that satisfies the second condition, so that the top-layer image of the image package can be changed in the editing state.
In the editing state, all the image frame data contained in the image-stream-format file (the image package) are revealed for the user to examine. The user can select one of the frames (hereinafter referred to as the "suitable frame") and set it as the new "top layer" of the image-stream-format file (the image package) in place of the original "photographed frame", and then switch the image-stream-format file back from the editing state to the normal state by a physical button or a special gesture. At this point, because the "top layer" image data used for display has been changed from the "photographed frame" to the "suitable frame", the content of the image file the user sees in the normal state becomes the content of the "suitable frame". With this method the user can find the suitable frame in the image-stream-format file (the image package), that is, find, among the frames pre-recorded by the first image acquisition unit, the "splendid moment" that was missed at shooting time and display it again, thereby achieving a quick-capture effect and improving the snapshot capability of the electronic equipment.
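Continuing the earlier illustrative layout, re-pointing the package at a "suitable frame" only requires the stored top-layer index to be rewritten, roughly as follows; the layout and the function name set_top_layer remain hypothetical.

```python
import struct

def set_top_layer(path: str, new_top_index: int) -> None:
    """Rewrite the top-layer index of the assumed image-stream layout in place."""
    with open(path, "r+b") as f:
        (count,) = struct.unpack("<I", f.read(4))
        if not 0 <= new_top_index < count:
            raise ValueError("suitable frame index out of range")
        f.seek(4)                                   # the top-layer index follows the count
        f.write(struct.pack("<I", new_top_index))   # the "suitable frame" replaces the "photographed frame"

# Example: make frame 1 (a pre-recorded frame) the displayed top layer.
set_top_layer("package.imgstream", 1)
```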
For example, the user can bring another-layer image of the image package to the top layer of the image package by a predetermined gesture (for example, a flick gesture) or by pressing a hardware button on the electronic equipment (for example, an up/down button, a left/right button, etc.); and after determining the image of the "suitable frame" to be used as the top-layer image, the user can confirm the selected suitable image as the top-layer image by another predetermined gesture (for example, a tap gesture or a pinch-in gesture) or by pressing a hardware button on the electronic equipment (for example, a confirm button, etc.).
Similarly, the pinch-in gesture may be a specific closing action between the first finger and the second finger, as shown in the right part of Fig. 5: for example, an action from an open state of the first finger and the second finger to a closed state of the first finger and the second finger, or an action from an open state in which the distance between the first finger and the second finger is the second distance to an open state in which the distance between the first finger and the second finger is the first distance, the first distance being smaller than the second distance.
Preferably, the pinch-in gesture may be an action along the z-axis direction (i.e., the depth direction), similar to a two-finger pinch-in in the current x-y plane; that is, it may be a two-finger pinch-in along the z direction (hereinafter referred to as z-pinch in). When an input gesture of a z-direction two-finger pinch-in (z-pinch in) is detected, the display of the image package is switched back to the normal state, as shown in the right part of Fig. 5.
Next, electronic equipment 600 according to an embodiment of the present invention is described with reference to Fig. 6.
The electronic equipment 600 according to the embodiment of the present invention comprises: a storage unit 610, an operation input part 620, a processing unit 630, and a display unit 640.
The storage unit 610 stores an object, the object comprising a plurality of sub-objects.
The operation input part 620 receives a first operation input directed at the object.
The processing unit 630 judges whether the first operation input is a first operation on the object or a second operation on the object.
When the processing unit 630 judges that the first operation input is the first operation on the object, the processing unit 630 opens a particular sub-object in the object and displays the opened particular sub-object on the display unit 640.
When the processing unit 630 judges that the first operation input is the second operation on the object, the processing unit 630 displays the plurality of sub-objects in the object on the display unit 640.
As an example, the object is an image package that comprises a first number of images; the processing unit 630 uses one of the first number of images as a top-layer image of the image package, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
As another example, the object is a document package that comprises a first number of documents; the processing unit 630 uses one of the first number of documents as a top-layer document of the document package, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
As another example, the object is a video package that comprises a first number of videos; the processing unit 630 uses one of the first number of videos as a default video of the video package, each of the first number of videos serves as a sub-object, and the default video serves as the particular sub-object.
Preferably, the electronic equipment 600 further comprises a first image acquisition unit 650, and the first image acquisition unit 650 is used to collect images.
When a second operation input is received from the operation input part 620, the processing unit 630 enters a first operating mode and, in the first operating mode, captures and stores images with the first image acquisition unit 650 at a first time interval.
When a third operation input is received from the operation input part 620, the processing unit 630 switches from the first operating mode to a second operating mode and, in the second operating mode, captures and stores images with the first image acquisition unit 650 at a second time interval.
The processing unit 630 packages a second number of images last captured and stored in the first operating mode together with a third number of images captured and stored in the second operating mode to form the image package, wherein the first number equals the sum of the second number and the third number.
The first time interval may be the same as or different from the second time interval, and the image resolution of each image in the second number of images may be the same as or different from the image resolution of each image in the third number of images.
Preferably, the image package is an image-stream-format file in which the second number of images and the third number of images are encapsulated in sequence according to their shooting order.
Preferably, the electronic equipment 600 is a pair of wearable glasses, the display unit 640 satisfies a predetermined light transmittance, and the operation input part 620 comprises a second image acquisition unit for collecting an input gesture of the user.
In this case, the operation input part 620 collects the user's input gesture as the first operation input directed at the object.
When the input gesture satisfies a first condition, the processing unit 630 determines that the input gesture is the first operation. When the input gesture satisfies a second condition, the processing unit 630 determines that the input gesture is the second operation. The first condition is different from the second condition.
For example, the second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture.
For example, the first operating body may be a first finger and the second operating body may be a second finger, in which case the second condition may be a pinch-out action between the first finger and the second finger, as shown in the left part of Fig. 5.
The second condition may also be an action along the z-axis direction (i.e., the depth direction), similar to a two-finger pinch-out gesture in the current x-y plane; that is, the second condition may be a two-finger pinch-out along the z direction (z-pinch out). When an input gesture of a z-direction two-finger pinch-out (z-pinch out) is detected, the image package is expanded in the z direction into its different image frames, as shown in the left part of Fig. 5.
In this case, preferably, the second image acquisition unit may comprise two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the input gesture is recognized by the second image acquisition unit so as to judge whether the input gesture satisfies the second condition.
In addition, as described above, an input gesture that satisfies the second condition can also be used to switch from the normal browsing state of the image package (the normal state) to the editing state, so that a suitable frame can be selected as the top-layer image in the editing state. The display can then be switched back from the editing state to the normal state by a similar pinch-in gesture. The pinch-in gesture may be a specific closing action between the first finger and the second finger, as shown in the right part of Fig. 5. Similarly, the pinch-in gesture may be an action along the z-axis direction (i.e., the depth direction). When an input gesture of a z-direction two-finger pinch-in (z-pinch in) is detected, the display of the image package is switched back to the normal state, as shown in the right part of Fig. 5.
With the processing method and electronic equipment according to the embodiments of the present invention, files are packaged and stored in a special file format, and a suitable file can be selected from the package, or a default top-layer document can be used, which improves the flexibility and convenience of file operations.
Although exemplary embodiments have been described here with reference to the accompanying drawings, it should be understood that the above exemplary embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as defined by the appended claims.

Claims (14)

1. A processing method for electronic equipment, the electronic equipment comprising a display unit and an operation input part, the processing method comprising:
receiving, by the operation input part, a first operation input directed at an object, the object comprising a plurality of sub-objects;
judging whether the first operation input is a first operation on the object or a second operation on the object;
when the first operation input is the first operation, opening a particular sub-object in the object and displaying it on the display unit; and
when the first operation input is the second operation, displaying the plurality of sub-objects in the object on the display unit.
2. The processing method as claimed in claim 1, wherein
the object is an image package that comprises a first number of images, the image package uses one of the first number of images as a top-layer image, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
3. The processing method as claimed in claim 1, wherein
the object is a document package that comprises a first number of documents, the document package uses one of the first number of documents as a top-layer document, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
4. The processing method as claimed in claim 2, wherein the electronic equipment further comprises a first image acquisition unit, and the object is generated by the following steps:
receiving a second operation input by the operation input part;
in response to the second operation input, entering a first operating mode, and in the first operating mode, capturing and storing images with the first image acquisition unit at a first time interval;
receiving a third operation input by the operation input part;
in response to the third operation input, switching from the first operating mode to a second operating mode, and in the second operating mode, capturing and storing images with the first image acquisition unit at a second time interval; and
packaging a second number of images last captured and stored in the first operating mode together with a third number of images captured and stored in the second operating mode to form the image package,
wherein the first number equals the sum of the second number and the third number.
5. The processing method as claimed in claim 4, wherein the image package is an image-stream-format file in which the second number of images and the third number of images are encapsulated in sequence according to their shooting order.
6. The processing method as claimed in claim 2, wherein the electronic equipment is a pair of wearable glasses, the display unit satisfies a predetermined light transmittance, the operation input part comprises a second image acquisition unit, and the first operation input is an input gesture of the user collected by the second image acquisition unit,
and wherein judging whether the first operation input is the first operation on the object or the second operation on the object comprises:
determining that the input gesture is the first operation when the input gesture satisfies a first condition; and
determining that the input gesture is the second operation when the input gesture satisfies a second condition,
wherein the first condition is different from the second condition.
7. The processing method as claimed in claim 6, wherein the second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture,
and wherein the second image acquisition unit comprises two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the input gesture is recognized by the second image acquisition unit.
8. Electronic equipment, comprising:
a storage unit for storing an object, the object comprising a plurality of sub-objects;
an operation input part for receiving a first operation input directed at the object;
a processing unit for judging whether the first operation input is a first operation on the object or a second operation on the object; and
a display unit;
wherein, when the processing unit judges that the first operation input is the first operation on the object, the processing unit opens a particular sub-object in the object and displays the opened particular sub-object on the display unit; and
when the processing unit judges that the first operation input is the second operation on the object, the processing unit displays the plurality of sub-objects in the object on the display unit.
9. The electronic equipment as claimed in claim 8, wherein
the object is an image package that comprises a first number of images, the processing unit uses one of the first number of images as a top-layer image of the image package, each of the first number of images serves as a sub-object, and the top-layer image serves as the particular sub-object.
10. The electronic equipment as claimed in claim 8, wherein
the object is a document package that comprises a first number of documents, the processing unit uses one of the first number of documents as a top-layer document of the document package, each of the first number of documents serves as a sub-document, and the particular sub-object is the top-layer document or the most recently generated or edited document among the first number of documents.
11. The electronic equipment as claimed in claim 9, further comprising a first image acquisition unit, wherein
when a second operation input is received from the operation input part, the processing unit enters a first operating mode and, in the first operating mode, captures and stores images with the first image acquisition unit at a first time interval;
when a third operation input is received from the operation input part, the processing unit switches from the first operating mode to a second operating mode and, in the second operating mode, captures and stores images with the first image acquisition unit at a second time interval; and
the processing unit packages a second number of images last captured and stored in the first operating mode together with a third number of images captured and stored in the second operating mode to form the image package,
wherein the first number equals the sum of the second number and the third number.
12. The electronic equipment as claimed in claim 11, wherein the image package is an image-stream-format file in which the second number of images and the third number of images are encapsulated in sequence according to their shooting order.
13. The electronic equipment as claimed in claim 9, wherein the electronic equipment is a pair of wearable glasses, the display unit satisfies a predetermined light transmittance, and the operation input part comprises a second image acquisition unit for collecting an input gesture of the user,
wherein the processing unit judging whether the first operation input is the first operation on the object or the second operation on the object comprises:
the processing unit determining that the input gesture is the first operation when the input gesture satisfies a first condition; and
the processing unit determining that the input gesture is the second operation when the input gesture satisfies a second condition,
wherein the first condition is different from the second condition.
14. The electronic equipment as claimed in claim 13, wherein the second condition at least comprises a first operating body in the input gesture moving away from a second operating body in the input gesture,
and wherein the second image acquisition unit comprises two sub-image acquisition units placed respectively on the left and right sides of the wearable glasses, and the input gesture is recognized by the second image acquisition unit.
CN201210589682.5A 2012-12-28 2012-12-28 Processing method and electronic equipment Active CN103905763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210589682.5A 2012-12-28 2012-12-28 Processing method and electronic equipment (granted as CN103905763B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210589682.5A 2012-12-28 2012-12-28 Processing method and electronic equipment (granted as CN103905763B)

Publications (2)

Publication Number Publication Date
CN103905763A (en) 2014-07-02
CN103905763B CN103905763B (en) 2019-03-29

Family

ID=50996888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210589682.5A Processing method and electronic equipment (Active, granted as CN103905763B) 2012-12-28 2012-12-28

Country Status (1)

Country Link
CN (1) CN103905763B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227827A (en) * 2015-07-09 2016-01-06 北京君正集成电路股份有限公司 A kind of photographic method of intelligent glasses and intelligent glasses
CN105554384A (en) * 2015-12-17 2016-05-04 上海青橙实业有限公司 Wearable apparatus and shooting method
CN106331491A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Photographing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999023524A1 (en) * 1997-10-30 1999-05-14 The Microoptical Corporation Eyeglass interface system
CN1691015A (en) * 2004-04-23 2005-11-02 奥林巴斯株式会社 Information management apparatus and information management method
CN1831812A (en) * 2005-03-10 2006-09-13 株式会社东芝 Document managing apparatus
CN101145161A (en) * 2006-09-14 2008-03-19 三星电子株式会社 Apparatus and method of composing web document and apparatus of setting web document arrangement
CN101702752A (en) * 2009-11-13 2010-05-05 天津三星光电子有限公司 Method for realizing pre-shooting function of digital camera
WO2011106798A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
CN102798986A (en) * 2012-06-13 2012-11-28 南京物联传感技术有限公司 Intelligent glasses and working method thereof


Also Published As

Publication number Publication date
CN103905763B (en) 2019-03-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant