SUMMARY: Patent Portfolio covering Closed Captioning, Analytics and Tags
The video entertainment landscape is shifting beneath us, with more viewers moving from traditional TV to digital video every day. The old-school system, in which content programming was known months in advance, is gone. Ephemerality is now the zeitgeist.
Like will-o'-the-wisps, hot videos appear and vanish in a matter of hours, and advertising, the lifeblood of the entertainment industry, needs a way to serve ads relevant to the trending meme.
Samir Abed has created such a system using closed captions and social media: The new Social Media Video Referral system. The system uses a cloud-based closed captioning database that is updated in real-time and indexed to social media tags. With this combination of data sources, users and advertisers can immediately find videos relevant to trending social media mentions/hashtags.
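The core idea, a caption database queried by trending social tags, can be sketched in a few lines. This is an illustrative sketch only, not code from the patents; the video IDs and tokenization are assumptions.

```python
# Illustrative sketch: an inverted index over caption text, queried with a
# trending hashtag to surface matching videos. Not the patented implementation.
from collections import defaultdict

def build_caption_index(captions):
    """captions: dict of video_id -> caption text. Returns word -> set of video_ids."""
    index = defaultdict(set)
    for video_id, text in captions.items():
        for word in text.lower().split():
            index[word].add(video_id)
    return index

def videos_for_hashtag(index, hashtag):
    """Look up a trending hashtag (e.g. '#SuperBowl') in the caption index."""
    term = hashtag.lstrip("#").lower()
    return sorted(index.get(term, set()))

captions = {
    "vid1": "the halftime show at the superbowl was electric",
    "vid2": "a quiet cooking tutorial",
}
index = build_caption_index(captions)
print(videos_for_hashtag(index, "#SuperBowl"))  # ['vid1']
```

A production system would of course use a search engine rather than an in-memory dict, but the hashtag-to-caption lookup is the essential link the portfolio describes.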
Neopatents MARKET ANALYSIS:
5 Factors for a Rapidly Expanding Closed Caption Advertising Market
3. Social media is emerging as the new search engine
People may be spending less time with social media, but they still use it to find videos. Hashtags and social media mentions are growing as the search mechanism for netizens to find content, though this is really a referral platform. Once Google switched from a subject taxonomy (like the Dewey Decimal system, if you remember that) to its PageRank algorithm, search continued mutating to the point where social tagging (hashtags/mentions), not subject matter, has become the dominant search criterion.
4. Boom in digital video + rise of social media hashtags = a new referral system
The explosive growth in digital video demand along with the rise of social media as a content referral platform creates an opportunity for a technology that can more easily connect viewers with desired digital video. To this end, Samir Abed proposes his Social Media Video Referral System. Is there technological support for Samir’s system? His system requires captions to work, which YouTube has taken care of (see #1 above) and digital video advertising to generate revenue.
Digital video ad spending nearly doubled, from 2.4 percent of total ad budgets in 2013 to 4.4 percent in Q2 2015.
Source: https://contently.com/strategist/2015/07/06/the-explosive-growth-of-online-video-in-5-charts/
This increase in ad spending is due, in part, to:
- new video platforms (Snapchat, Yahoo)
- proliferation of original video content on existing platforms (Hulu, YouTube, Amazon)
Overview – Patent Context Summary
To showcase the closed captioning patent field, the US Patent database was analyzed and sorted by total patent ownership to identify the largest players in the market.
Closed Captioning for Pictorial Communications
The US Patent database was analyzed for patents and published applications directed to inventions for closed captioning for television and other pictorial communications. Because the closed captioning technology field, which began in the early 1970s, has numerous patents, only the records for the last five years (2011-2016) were summarized to show the more recent participants in the field.
The top 6 companies control approximately 32% of the recent patents, with LG as the leader. With a combined 86 patents and applications, LG's portfolio accounts for about 10% of the total patent assets for this time period. Sony owns the second largest share of closed captioning patent assets with 52. Google, AT&T, EchoStar and Microsoft are also large players in this space.
Databases of Closed Captioning Content
The present portfolio is also directed toward the use of caption databases to retrieve relevant video. The top 6 assignees in this category have 24% of patents and applications for all years. Microsoft leads the field with 73 patents and publications, followed by IBM, Samsung, AT&T, Philips and Sony.
Closed captioning with social media
The use of closed captioning to find relevant social media is a cutting-edge aspect of the present portfolio. A total of 121 patents and patent applications suggest the use of captions in social media. The top six assignees own 31% of this category. Google leads the field with 11 patents, followed by ROVI Guides, MobiTV, RoundBox, Microsoft and Apple.
SUMMARY OF INVENTIONS
The following Invention Abstracts are from the public patents in Mr. Samir Abed's portfolio. The full description of the patents and patent applications are available by selecting the document title.
1.) US Pat. No. 8,424,052:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
System and methods for finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
Inventors:
Abed; Samir (Chapel Hill, NC)
Assignee:
Filed:
December 14, 2010
Issued:
April 16, 2013
Claims:
21
8424052
1.
A method for finding and accessing desired content from audio and video content sources, the method steps comprising: (10) (12)
providing a server with a processing unit, the server is constructed, configured and coupled to enable communication over a network;
the server provides for user interconnection with the server over the network using a computing device positioned remotely from the server;
the server and computing device running non-transitory computer-readable storage media with executable programs stored thereon;
the computing device monitoring a broadcast;
the executable programs:
extracting captions from a broadcast in near real-time;
aggregating the captions in a database in a cloud computing system;
indexing the database content;
searching the captions for a mention of at least one target;
analyzing the results for desired content;
and indexing into the database to extract the desired content;
and thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
2.
The method of claim 1, wherein the audio content is any voice broadcast. (0)
3.
The method of claim 1, wherein the extraction capability is embedded in a device selected from the group consisting of PC, TV, PVR, DVR, SOC and mobile device. (0)
4.
The method of claim 1, further including the step of adding at least one Advertisement Tag Code to electronically mark an advertisement or target content. (4)
5.
The method of claim 4, wherein the at least one Advertisement Tag Code is encrypted. (0)
6.
The method of claim 4, wherein the at least one Advertisement Tag Code is visible or invisible. (0)
7.
The method of claim 4, wherein the at least one Advertisement Tag Code is at the beginning and at the end of the advertisement. (0)
8.
The method of claim 4, wherein the Advertisement Tag Code is encoded in a method selected from the group consisting of VBI or closed-caption stream or live Internet video. (0)
9.
The method of claim 1, further including the step of creating captions for un-captioned audio content. (0)
10.
The method of claim 1, wherein a local machine is provided running a non-transitory computer-readable storage medium with an executable program stored thereon, the executable programs extracting the captions. (0)
11.
The method of claim 1, wherein the captions are aggregated in one location. (0)
12.
The method of claim 1, wherein the analysis includes determining the earned media and paid media of the at least one target. (0)
13.
The method of claim 1, wherein the analysis includes categorizing the at least one target mentions into positive, negative, neutral and unknown categories. (0)
14.
The method of claim 1, wherein the analysis includes linking the target mention results to other social media and digital media target mention results. (0)
15.
The method of claim 1, wherein the retrieved captions are retrieved from media selected from the group consisting of audio and video media. (0)
16.
A system for extracting captions in near real-time, comprising: (1) (8)
a server with a processing unit, a database on a cloud computing system, and a local machine tuned to at least one broadcast;
the server constructed, configured and coupled to enable communication over a network;
the server and database and the server and local machine interconnected over the network;
the server and local machine running non-transitory computer-readable storage media with executable programs stored thereon;
the executable programs of the local machine extracting captions from the broadcast in near real-time and transmitting them to the server;
the server executable programs storing, indexing and retrieving the captions in and from the database;
the server executable programs aggregating the captions on the cloud computing system;
and thereby providing a system for local extraction of audio captions from a broadcast.
17.
The method of claim 16, wherein the local machine's executable programs are a system on a chip application. (0)
18.
A method for extracting voice broadcasts, the method steps comprising: (0) (13)
providing a database on a cloud computing system and a server with a processing unit, the server is constructed, configured and coupled to enable communication over a network;
the server provides for user interconnection with the server over the network using a computing device positioned remotely from the server;
the server and computing device running non-transitory computer-readable storage media with executable programs stored thereon;
the executable programs;
the computing device monitoring a voice broadcast;
the executable programs:
extracting captions from the voice broadcast in near real-time;
aggregating the captions in the database in the cloud computing system;
indexing the database content;
searching the captions for a mention of at least one target;
and analyzing the results for desired content;
indexing into the database to extract the desired content;
and thereby providing a method for quickly finding and accessing desired voice broadcasts from a large number of sources.
19.
A method for managing communication through mass media, the method steps comprising: (0) (8)
monitoring for target mentions;
aggregating the target mentions in a database in a cloud computing system;
categorizing the target mentions into positive, negative, neutral and unknown categories;
linking the target mentions in real-time to determine whether such mentions trigger a spike in social media and digital media;
visualizing the results and analyzing for trends;
responding to the media with interest with measured response based on the results;
measuring the impact of the response;
and thereby managing communication through mass media to increase mentions of a target.
20.
A method for preventing invalid captions from being submitted to a closed caption database, the method steps comprising: (0) (4)
authenticating linked devices;
extracting captions from authenticated linked devices;
aggregating the captions in a database in a cloud computing system;
and thus preventing the submission of captions that are not part of the broadcast.
21.
A method for extracting complete captions from fragmented audio captions, the method steps comprising: (0) (6)
extracting caption fragments from a broadcast;
aggregating the caption fragments in a database in a cloud computing system;
correctly sequencing the caption fragments by matching fragment overlaps;
eliminating redundancies;
assembling the caption fragments into a single transcript;
and thereby providing a more complete captions transcript from fragmented captions transcripts.
7806 Hwy 751, Suite 130
|
Durham, NC 27713 USA
|
+1 919-802-1124
|
admin@neopatents.com
The patent is directed to finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
The significant advantage of this patent over competing technology is the cloud computing aspect of the invention, which allows for storage, aggregation, and analysis of captions through a cloud computing system. The system can link target mentions in the captions to real-time social media trends, respond to those trends based on analysis, and measure the impact of a response.
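Claim 13 describes categorizing target mentions into positive, negative, neutral and unknown categories. A minimal sketch of that idea, using assumed keyword lists rather than the patented analysis method, might look like this:

```python
# Minimal sketch (assumed wordlists, not the patented method) of categorizing
# target mentions into positive, negative, neutral and unknown categories.
POSITIVE = {"great", "love", "amazing"}
NEGATIVE = {"terrible", "hate", "awful"}

def categorize_mention(caption_text):
    """Classify a single caption mention of a target into one of four buckets."""
    words = set(caption_text.lower().split())
    has_pos, has_neg = bool(words & POSITIVE), bool(words & NEGATIVE)
    if has_pos and has_neg:
        return "unknown"   # mixed signals
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"

print(categorize_mention("I love this Acme ad"))         # positive
print(categorize_mention("that Acme spot was terrible")) # negative
```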
2.) US Patent No. 8,763,067:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
System and methods for finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
Inventors:
Abed; Samir (Chapel Hill, NC)
Assignee:
Filed:
March 15, 2013
Issued:
June 24, 2014
Claims:
21
8763067
1.
A method for extracting complete captions from fragmented audio captions, the method steps comprising: (9) (5)
extracting caption fragments from a broadcast;
correctly sequencing the caption fragments by matching fragment overlaps;
eliminating redundancies;
assembling the caption fragments into a single transcript;
thereby providing a more complete captions transcript from fragmented captions transcripts.
2.
The method of claim 1, wherein the steps are repeated for at least two broadcasts. (1)
3.
The method of claim 2, further comprising the steps of: (3) (6)
aggregating the transcripts in a database;
indexing the database content;
searching the transcripts for a mention of at least one target, thereby creating target mention results, wherein the at least one target includes at least one keyword, at least one concept, and combinations thereof;
analyzing the target mention results for desired content;
indexing into the database to extract the desired content;
thereby providing a method for quickly finding and accessing desired content from the broadcasts.
13.
The method of claim 3, wherein the step of analyzing further includes determining an earned media and a paid media of the at least one target. (0)
14.
The method of claim 3, wherein the step of analyzing further includes categorizing the at least one target mentions into positive, negative, neutral and unknown categories. (0)
15.
The method of claim 3, wherein the step of analyzing further includes linking the target mention results to other social media and digital media target mention results. (1)
16.
The method of claim 15, wherein the step of analyzing further includes metrics comparing the target mention results to other social media and digital media target mention results. (1)
17.
The method of claim 16, wherein the step of analyzing further includes a predetermined time. (0)
4.
The method of claim 1, wherein the broadcast is any voice broadcast. (0)
5.
The method of claim 1, wherein the extraction capability is embedded in a device selected from the group consisting of a personal computer (PC), a television (TV), a personal video recorder (PVR), a digital video recorder (DVR), a system on a chip (SOC) and a mobile device. (1)
21.
The method of claim 5, further including: (0) (1)
authenticating the device.
6.
The method of claim 1, further including the step of adding at least one Advertisement Tag Code to electronically mark an advertisement or target content. (4)
7.
The method of claim 6, wherein the at least one Advertisement Tag Code is encrypted. (0)
8.
The method of claim 6, wherein the at least one Advertisement Tag Code is visible or invisible. (0)
9.
The method of claim 6, wherein the at least one Advertisement Tag Code is at the beginning and at the end of the advertisement. (0)
10.
The method of claim 6, wherein the Advertisement Tag Code is encoded in a method selected from the group consisting of VBI or closed-caption stream or live Internet video. (0)
11.
The method of claim 1, further including the step of creating captions for un-captioned audio content. (0)
12.
The method of claim 1, wherein a local machine is provided running a non-transitory computer-readable storage medium with at least one executable program stored thereon; (0) (1)
the at least one executable program performing the step of extracting the caption fragments.
18.
The method of claim 1, wherein the broadcast is a video broadcast. (0)
19.
The method of claim 1, wherein the step of extracting occurs in near real-time; (0) (1)
and further including the step of storing, indexing and retrieving the caption fragments in and from a database.
20.
The method of claim 1, further comprising the steps of: (0) (7)
monitoring the transcripts for target mentions;
categorizing the target mentions into positive, negative, neutral and unknown categories;
linking the target mentions in real-time to determine whether the target mentions trigger a spike in social media or digital media;
visualizing the target mention results and analyzing the target mention results for trends;
responding to the social media or the digital media with interest with measured response based on the target mention results;
measuring an impact of the measured response;
thereby managing communication through mass media to increase the target mentions.
Continuation-in-part of (1). Additional developments include the aggregation and analysis of multiple transcripts for target mentions.
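Claim 1 of this patent stitches fragmented captions into one transcript by sequencing fragments via matching overlaps, eliminating redundancies, and assembling the result. A sketch of that flow, where the greedy word-overlap heuristic is an assumption and not the patented algorithm:

```python
# Sketch of overlap-based caption stitching: sequence fragments by matching
# overlaps, drop the redundancy, assemble one transcript. The greedy word-level
# heuristic here is an illustrative assumption, not the patented algorithm.
def merge_pair(a, b, min_overlap=2):
    """If the tail of a overlaps the head of b (word-wise), merge them."""
    a_words, b_words = a.split(), b.split()
    for size in range(min(len(a_words), len(b_words)), min_overlap - 1, -1):
        if a_words[-size:] == b_words[:size]:
            return " ".join(a_words + b_words[size:])
    return None

def stitch(fragments):
    """Greedily chain fragments whose overlaps match; concatenate otherwise."""
    transcript = fragments[0]
    for frag in fragments[1:]:
        merged = merge_pair(transcript, frag)
        transcript = merged if merged is not None else transcript + " " + frag
    return transcript

frags = [
    "breaking news from the",
    "news from the capital today",
    "capital today lawmakers voted",
]
print(stitch(frags))  # breaking news from the capital today lawmakers voted
```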
3.) US Patent No. 9,055,344:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
System and methods for finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
Inventors:
Abed; Samir (Chapel Hill, NC)
Assignee:
Filed:
June 09, 2014
Issued:
June 09, 2015
Claims:
15
9055344
1.
A method for finding and accessing target content from at least one audio or video broadcast, comprising: (3) (9)
providing at least one device and a cloud-based computing system including at least one server, wherein the at least one device and the cloud-based computing system are configured to communicate over at least one network;
the device extracting captions of at least one broadcast;
the cloud-based computing system:
receiving, aggregating and indexing the captions from the device;
searching the captions for at least one target relating to the target content, thereby creating target captions, wherein each target includes at least one keyword, at least one concept or combinations thereof;
analyzing and indexing the target captions;
comparing at least one target or target caption to social or digital media content;
and linking the social or digital media content to at least one broadcast, at least one target caption or both;
wherein the step of comparing only considers the social or digital media content over a time period.
2.
The method of claim 1, wherein the step of comparing only considers the social or digital media content related to a geographic area. (0)
3.
The method of claim 1, further comprising stitching at least two target captions into at least one segment. (3)
4.
The method of claim 3, further comprising: (0) (2)
delivering at least one segment to a device;
receiving feedback from the device relating to the at least one segment, the feedback being a ranking, an augmentation, a correction, or a combination thereof.
5.
The method of claim 3, further comprising: (1) (2)
posting at least one segment online;
monitoring any online activity related to the segment posting.
6.
The method of claim 5, wherein any online activity includes tweets and retweets and wherein the retweets are weighted over the tweets. (0)
7.
The method of claim 3, wherein the step of stitching is based on rules established by an application or a user. (0)
8.
The method of claim 1, further comprising: (0) (3)
sequencing the captions by matching overlaps;
eliminating redundancies;
assembling the sequenced captions into a single caption.
9.
A system for extracting captions, comprising: (2) (8)
a cloud-based computing system including at least one server communicating over at least one network with and at least one device;
the server and the device each including non-transitory computer-readable storage media having at least one executable program stored thereon;
the device receiving at least one broadcast;
at least one executable program of at least one device configured to extract captions from at least one broadcast and transmit the captions to at least one server;
at least one executable program of at least one server configured to store, index and retrieve the captions;
at least one executable program of at least one server configured to search the captions for at least one target for creating target captions, wherein each target includes at least one keyword, at least one concept or combinations thereof;
at least one executable program of at least one server configured to analyze and index the target captions;
wherein the at least one device is a personal computer, a television, a personal video recorder, a digital video recorder, a system on a chip, a mobile device, or combinations thereof.
10.
The system of claim 9, further comprising at least one executable program of at least one server configured to stitch at least two captions or target captions into a segment. (3)
11.
The system of claim 10, further comprising: (0) (2)
at least one executable program of at least one server configured to deliver the segment;
at least one executable program of at least one server configured to receive feedback relating to viewing of the delivered segment, the feedback being a ranking, an augmentation, a correction, or a combination thereof.
12.
The system of claim 10, further comprising: (1) (2)
at least one executable program of at least one server configured to post the segment online;
at least one executable program of at least one server configured to monitor any online activity related to the segment post.
13.
The system of claim 12, wherein any online activity includes tweets and retweets and wherein the retweets are weighted over the tweets. (0)
14.
The system of claim 10, wherein stitching is based on rules stored on the storage media of the server or the device. (0)
15.
The system of claim 9, further comprising: (0) (2)
comparing at least one target or target caption to social or digital media content;
linking the social or digital media content to at least one broadcast, at least one target caption or a combination thereof.
Continuation of (2). Additional developments include geographic matching of broadcast transcripts to social or digital media trends, stitching multiple target captions into one segment, and posting these segments online. Segments shared online are modified based on users’ preset rules for sequencing and eliminating redundancies, and are monitored for online activity, including tweets and retweets.
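Claims 6 and 13 specify that, when monitoring online activity for a posted segment, retweets are weighted over tweets. A sketch of such a weighted engagement score, where the 2x weight is an illustrative assumption (the patent fixes no particular ratio):

```python
# Sketch of weighting retweets over tweets when scoring online activity for a
# posted segment. The specific 2x weight is an assumption for illustration.
RETWEET_WEIGHT = 2.0
TWEET_WEIGHT = 1.0

def engagement_score(activity):
    """activity: list of 'tweet' or 'retweet' events for one posted segment."""
    return sum(RETWEET_WEIGHT if kind == "retweet" else TWEET_WEIGHT
               for kind in activity)

print(engagement_score(["tweet", "retweet", "retweet"]))  # 5.0
```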
4.) US Patent No. 9,282,350:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
System and methods for finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
Inventors:
Abed; Samir (Chapel Hill, NC)
Assignee:
Filed:
May 13, 2015
Issued:
March 08, 2016
Claims:
20
9282350
1.
A method for finding and accessing target content from at least one audio or video broadcast, comprising the steps of: (5) (9)
providing at least one device, a summarizer, and a cloud-based computing system including at least one server, wherein the at least one device and the cloud-based computing system are configured to communicate over at least one network;
the device extracting captions of at least one broadcast;
the cloud-based computing system:
receiving the captions from the device;
searching the captions for at least one keyword;
and providing an alert based upon the at least one keyword;
providing a full transcript or a partial transcript of the at least one broadcast;
and the summarizer summarizing the full transcript or the partial transcript of the at least one broadcast;
wherein the alert includes an identification of the at least one broadcast.
2.
The method of claim 1, wherein the step of providing the full transcript or the partial transcript of the at least one broadcast is performed upon receiving a payment. (0)
3.
The method of claim 1, wherein the alert is provided via email and/or SMS messaging. (0)
4.
The method of claim 1, wherein the alert is provided in an audio format, wherein the audio format includes at least a portion of the at least one broadcast. (0)
5.
The method of claim 1, wherein the alert is provided in a video format, wherein the video format includes at least a portion of the at least one broadcast. (0)
6.
The method of claim 1, further comprising a facility detecting and handling duplicate entries in the at least one broadcast. (2)
7.
The method of claim 6, further comprising the step of the facility detecting garbage and offensive words, wherein the garbage includes garbled captions. (0)
8.
The method of claim 6, further comprising the steps of the facility: (0) (1)
standardizing and detecting recording times of the at least one broadcast across national and international boundaries and retrieving and presenting results for queries into a captions database spanning multiple channels and multiple time-zones.
9.
A method for finding and accessing target content from at least one audio or video broadcast, comprising the steps of: (2) (7)
providing at least one device and a cloud-based computing system including at least one server, wherein the at least one device and the cloud-based computing system are configured to communicate over at least one network;
the device extracting captions of at least one broadcast in real-time;
the cloud-based computing system:
receiving, aggregating, and indexing the captions from the device;
searching the captions for at least one target relating to the target content, thereby creating target captions, wherein each target includes at least one keyword, at least one concept or combinations thereof;
analyzing and indexing the target captions;
adding at least one advertisement tag code for electronically marking at least one advertisement of the at least one broadcast.
10.
The method of claim 9, further comprising the step of creating additional captions for un-captioned content of the at least one broadcast. (0)
11.
The method of claim 9, wherein the step of analyzing and indexing the target captions includes categorizing at least one target caption as positive, negative, neutral or unknown. (0)
12.
A method for finding and accessing target content from at least one audio or video broadcast, comprising the steps of: (7) (4)
extracting at least one advertisement tag code (ATC) from the at least one broadcast;
adding the at least one ATC to an ATC analytics database, wherein the at least one ATC electronically marks an advertisement associated with the at least one ATC;
wherein the at least one ATC includes information associated with an advertisement label, an intended advertisement market target, a demographic target, a television program, and/or a time of advertisement;
wherein the at least one ATC is placed in a closed caption stream of a broadcast TV channel or a live Internet video stream.
13.
The method of claim 12, further comprising the step of analyzing the at least one ATC, wherein the step of analyzing the at least one ATC includes determining how many times an entity has advertised on a specific channel. (0)
14.
The method of claim 12, further comprising the step of analyzing the at least one ATC, wherein the step of analyzing the at least one ATC includes determining which entities have advertised on a specific channel. (0)
15.
The method of claim 12, further comprising the step of analyzing the at least one ATC, wherein the step of analyzing the at least one ATC includes determining how many times an entity has advertised during a specific show. (0)
16.
The method of claim 12, wherein the at least one ATC includes at least one of a region, a channel, a time slot, a day, and a length of time for the at least one broadcast. (0)
17.
The method of claim 12, wherein the at least one ATC includes at least one link to a web site. (1)
18.
The method of claim 17, further comprising the step of determining traffic on the website as a result of the at least one link to the website in the at least one ATC. (0)
19.
The method of claim 12, wherein the at least one ATC is encrypted. (0)
20.
The method of claim 12, wherein the at least one ATC is a unique image. (0)
Continuation of (3). Additional developments include the addition of summaries of the extracted broadcast transcripts, as well as SMS or email alerts that can notify a user of keywords found within transcripts. Further developments to the advertisement tag code (ATC) extraction and analysis technology include indexing occurrences of ATCs along with the demographic, time slot, and number of times an ATC has been broadcast.
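Claims 13-15 describe ATC analytics such as how many times an entity has advertised on a specific channel or during a specific show. A sketch of that tallying, where the record fields (`advertiser`, `channel`, `show`) are assumed for illustration:

```python
# Sketch of claim 13-15 style ATC analytics: tally how often each advertiser's
# tag code appears per channel and per show. Record fields are assumptions.
from collections import Counter

def atc_counts(atc_records):
    """atc_records: list of dicts with 'advertiser', 'channel', 'show' keys."""
    per_channel = Counter((r["advertiser"], r["channel"]) for r in atc_records)
    per_show = Counter((r["advertiser"], r["show"]) for r in atc_records)
    return per_channel, per_show

records = [
    {"advertiser": "Acme", "channel": "WXYZ", "show": "Evening News"},
    {"advertiser": "Acme", "channel": "WXYZ", "show": "Late Show"},
    {"advertiser": "Bolt", "channel": "WXYZ", "show": "Evening News"},
]
per_channel, per_show = atc_counts(records)
print(per_channel[("Acme", "WXYZ")])  # 2
```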
System and methods for finding and accessing desired audio content from audio content sources, including means and methods for extracting captions from a broadcast; aggregating the captions in a database; indexing the database content; searching the captions for a mention of at least one target; analyzing the results for desired content; indexing into the database to extract the desired content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
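The aggregate/index/search loop in that abstract can be illustrated with a minimal in-memory store. This is a hedged sketch under assumed names (`CaptionStore`, `add`, `search`); the patents do not specify this data structure.

```python
# Minimal sketch of the described pipeline: captions from many sources are
# pooled into one store, an inverted index maps words to caption records,
# and a target search indexes back into the store to retrieve segments.
from collections import defaultdict

class CaptionStore:
    def __init__(self):
        self.captions = []                 # (source, timestamp, text) rows
        self.index = defaultdict(set)      # word -> set of caption row ids

    def add(self, source, timestamp, text):
        """Aggregate one caption line and index its words."""
        row = len(self.captions)
        self.captions.append((source, timestamp, text))
        for word in text.lower().split():
            self.index[word].add(row)

    def search(self, target):
        """Return all caption rows mentioning the target word."""
        return [self.captions[i] for i in sorted(self.index[target.lower()])]
```

A production system would of course use a real search index rather than a Python dictionary, but the flow — extract, aggregate, index, search, retrieve — is the same.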
5.) US Patent No. 9,602,855:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
System and methods for finding and accessing target content from audio and video content sources, including means and methods for extracting captions from audio and video content sources; searching the captions for a mention of at least one target; extracting audio and video segments relating to the at least one target; delivering extracted audio and video segments to a user device; analyzing the results for target content; thereby providing a method for quickly finding and accessing desired audio and video content from a large number of sources.
Inventors:
Abed; Samir (Chapel Hill, NC)
Assignee:
Filed:
N/A
Issued:
March 21, 2017
Claims:
18
1.
A method for finding and accessing target content from audio and video content sources, comprising: (4) (6)
providing at least one device and a cloud-based platform, wherein the cloud-based platform comprises at least one server and at least one database and wherein the at least one device communicates with the cloud-based platform over at least one network;
the at least one device extracting captions of the audio and video content sources;
the cloud-based platform receiving extracted captions from the at least one device;
the cloud-based platform searching the extracted captions for at least one keyword relating to the target content;
the cloud-based platform extracting audio and video segments relevant to the target content from the audio and video sources;
and the cloud-based platform delivering extracted audio and video segments to the at least one device.
2.
The method of claim 1, further comprising aggregating, storing, indexing, searching and retrieving the extracted captions in and from the at least one database. (0)
3.
The method of claim 1, further comprising authorizing and authenticating the at least one device. (1)
4.
The method of claim 3, wherein the authorizing and authenticating are provided by private keys, shared keys, steganography and any other method including a secret code, wherein the secret code is sent from the cloud-based platform, and wherein the secret code is included in extracted captions. (0)
5.
The method of claim 1, further comprising assembling the extracted audio and video segments into a single or multiple segments based on established rules. (0)
6.
The method of claim 1, further comprising sharing extracted audio and video segments on at least one sharing platform selected from the group consisting of iTunes, Twitter, Facebook and other social media or digital media platforms. (1)
7.
The method of claim 6, further comprising receiving feedback and/or rating of the extracted audio and video segments via the at least one sharing platform. (1)
8.
The method of claim 7, further comprising providing analytics based on the extracted audio and video segments and the feedback and/or rating thereof in real-time or near real-time. (1)
9.
The method of claim 8, wherein the analytics comprises determining earned media and paid media of the extracted audio and video segments and categorizing the extracted audio and video segments into positive, negative, neutral and unknown categories. (0)
10.
A method for finding and accessing target content from at least one audio or video, comprising: (3) (7)
providing at least one device and a cloud-based computing system, wherein the at least one device communicates with the cloud-based computing system over at least one network;
the at least one device extracting captions of at least one audio or video;
the cloud-based computing system receiving extracted captions from the at least one device;
the cloud-based computing system searching the extracted captions for target content based on user profile and preferences;
the cloud-based computing system extracting audio or video segments relevant to the target content;
the cloud-based computing system delivering at least one alert regarding extracted audio or video segments to the at least one device;
and formatting the extracted captions to a more human readable text in free-form format.
11.
The method of claim 10, wherein the user profile and preferences comprise words of interest, modality of alerts, summarization levels of the extracted captions, and system housekeeping. (0)
12.
The method of claim 10, wherein the at least one alert is via Short Message Service (SMS) messaging, email alerts, audio alerts, video alerts, or any combination thereof. (0)
13.
The method of claim 10, further comprising determining the target content being mentioned in search engine results, social media sites and any other point of interest for a predetermined time period. (2)
14.
The method of claim 13, further comprising comparing and analyzing between the extracted audio or video segments and the social media activity and/or response during the predetermined time period. (0)
15.
The method of claim 13, wherein the social media sites comprise Facebook, Twitter and other web-based sites for groups. (0)
16.
A method for analyzing an advertisement campaign, comprising: (2) (5)
generating at least one advertisement tag code (ATC) for the advertisement campaign;
marking at least one audio or video related to the advertisement campaign with the at least one ATC;
extracting audio or video segments from the at least one audio or video related to the advertisement campaign;
monitoring social media activities related to the advertisement campaign, thereby creating social media data;
and analyzing effectiveness of an advertisement campaign based on social media data correlated in time with extracted audio or video segments.
17.
The method of claim 16, wherein the at least one ATC comprises information associated with predetermined factors selected from the group consisting of an advertisement label, an intended advertisement market target, a demographic target, a television program, a time of advertisement, and a code; (0) (1)
wherein the code is operable to link the at least one audio or video to the advertisement campaign.
18.
The method of claim 16, further comprising performing surveys related to the advertisement campaign, thereby creating survey data; (0) (1)
and analyzing the effectiveness of the advertisement campaign based on the social media and the survey data correlated in time with extracted audio or video segments.
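The correlation step in claims 16–18 — matching social media activity in time against extracted ad airings — can be sketched as follows. The window length and all names here are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of the time-correlation step: given timestamps at
# which an ATC-tagged advertisement aired and timestamps of social media
# mentions of the campaign, count the mentions that fall inside a fixed
# response window after any airing. Window length is an assumption.

def mentions_after_airings(airings, mentions, window=600):
    """airings, mentions: lists of epoch seconds; window: seconds after airing."""
    hits = 0
    for m in mentions:
        if any(a <= m <= a + window for a in airings):
            hits += 1
    return hits
```

Comparing this count against baseline mention volume (and, per claim 18, survey data) is one plausible way to gauge campaign effectiveness.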
Continuation of (4). Additional developments include the extraction of both audio and video from the analyzed content. These audio and video segments may be shared on social media or other digital platforms, with feedback collected on the segments. Additionally, the captions may be formatted into a more readable form. Lastly, a method is detailed for tracking an advertisement campaign through the extraction of ATCs, using surveys to gauge its effectiveness.
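Claim 9 above sorts extracted segments and their feedback into positive, negative, neutral and unknown categories. A toy version of that categorization step might look like the following; the tiny keyword lists are a stand-in for whatever classifier the actual system would use.

```python
# Hedged sketch of feedback categorization (positive / negative / neutral /
# unknown). Keyword lists are placeholders, not the system's real method.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}

def categorize(feedback_text):
    """Assign one of the four categories named in claim 9."""
    if not feedback_text.strip():
        return "unknown"
    words = set(feedback_text.lower().split())
    pos, neg = bool(words & POSITIVE), bool(words & NEGATIVE)
    if pos and not neg:
        return "positive"
    if neg and not pos:
        return "negative"
    return "neutral"
```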
6.)
US Patent Application No. 15/456,155:
Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements