

August 14, 2018

from Pradodesign 7 Ways to Group Video Chat While You Wait for That FaceTime Update

Bring up to 50 people into the conversation with these group video chat apps from Facebook, Snapchat, Google, and more.

https://ift.tt/2BdqMkg
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Experience Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which despite a few glitches SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
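To give a sense of how such an imitative loop might be structured, here is a minimal, hypothetical sketch (not Todo's actual code): it maps normalized face-tracker readings to actuator angles and smooths them, which is exactly the kind of filtering that noisy face data demands.

```python
# Hypothetical sketch of an imitative-mode control loop. The tracker
# values are fabricated; a real system would read them from a face
# tracker each frame. This is not SEER's actual implementation.

def smooth(prev: float, new: float, alpha: float = 0.3) -> float:
    """Exponential moving average to damp per-frame landmark noise."""
    return prev + alpha * (new - prev)

def to_servo_angle(value: float, lo: float, hi: float) -> float:
    """Map a normalized reading in [0, 1] to a servo range in degrees."""
    return lo + max(0.0, min(1.0, value)) * (hi - lo)

state = {"brow": 0.5, "eyelid": 0.5, "yaw": 0.5}

def step(frame: dict) -> dict:
    """One control tick: smooth the tracker output, emit servo targets."""
    for key in state:
        state[key] = smooth(state[key], frame[key])
    return {
        "brow_deg":   to_servo_angle(state["brow"],  -15, 15),
        "eyelid_deg": to_servo_angle(state["eyelid"],  0, 40),
        "neck_yaw":   to_servo_angle(state["yaw"],   -45, 45),
    }

# Example: a single (fabricated) tracker frame.
print(step({"brow": 0.8, "eyelid": 0.4, "yaw": 0.6}))
```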

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

https://ift.tt/2MOg1qa
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign Finding the Goldilocks zone for applied AI

Ivy Nguyen
Contributor

Ivy Nguyen is an associate at Zetta Venture Partners.

While Elon Musk and Mark Zuckerberg debate the dangers of artificial general intelligence, startups applying AI to more narrowly defined problems such as accelerating the performance of sales teams and improving the operating efficiency of manufacturing lines are building billion-dollar businesses. Narrowly defining a problem, however, is only the first step to finding valuable business applications of AI.

To find the right opportunity around which to build an AI business, startups must apply the “Goldilocks principle” in several different dimensions to find the sweet spot that is “just right” to begin — not too far in one dimension, not too far in another. Here are some ways for aspiring startup founders to thread the needle with their AI strategy, based on what we’ve learned from working with thousands of AI startups.

“Just right” prediction time horizons

Unlike pre-intelligence software, AI systems respond to the environment in which they operate: algorithms take in data and return an answer or prediction. Depending on the application, that prediction may describe an outcome in the near term, such as tomorrow’s weather, or an outcome many years in the future, such as whether a patient will develop cancer in 20 years. The time horizon of the algorithm’s prediction is critical to its usefulness and to whether it offers an opportunity to build defensibility.

Algorithms making predictions with long time horizons are difficult to evaluate and improve. For example, an algorithm may use the schedule of a contractor’s previous projects to predict that a particular construction project will fall six months behind schedule and go over budget by 20 percent. Until this new project is completed, the algorithm designer and end user can only tell whether the prediction is directionally correct — that is, whether the project is falling behind or costs are higher.

Even when the final project numbers end up very close to the predicted numbers, it will be difficult to complete the feedback loop and positively reinforce the algorithm. Many factors may influence complex systems like a construction project, making it difficult to A/B test the prediction to tease out the input variables from unknown confounding factors. The more complex the system, the longer it may take the algorithm to complete a reinforcement cycle, and the more difficult it becomes to precisely train the algorithm.

While many enterprise customers are open to piloting AI solutions, startups must be able to validate the algorithm’s performance in order to complete the sale. The most convincing way to validate an algorithm is by using the customer’s real-time data, but this approach may be difficult to achieve during a pilot. If the startup does get access to the customer’s data, the prediction time horizon should be short enough that the algorithm can be validated during the pilot period.

For most of AI history, slow computational speeds have severely limited the scope of applied AI.

Historic data, if it’s available, can serve as a stopgap to train an algorithm and temporarily validate it via backtesting. Training an algorithm making long time horizon predictions on historic data is risky because processes and environments are more likely to have changed the further back you dig into historic records, making historic data sets less descriptive of present-day conditions.

In other cases, while the historic data describing outcomes exists for you to train an algorithm, it may not capture the input variable under consideration. In the construction example, that could mean that you found out that sites using blue safety hats are more likely to complete projects on time, but since that hat color wasn’t previously helpful in managing projects, that information wasn’t recorded in the archival records. This data must be captured from scratch, which further delays your time to market.

Instead of making singular “hero” predictions with long time horizons, AI startups should build multiple algorithms making smaller, simpler predictions with short time horizons. Decomposing an environment into simpler subsystems or processes limits the number of inputs, making them easier to control for confounding factors. The BIM 360 Project IQ Team at Autodesk takes this small prediction approach to areas that contribute to construction project delays. Their models predict safety and score vendor and subcontractor quality/reliability, all of which can be measured while a project is ongoing.

Shorter time horizons make it easier for the algorithm engineer to monitor changes in the algorithm’s performance and act quickly to improve it, instead of being limited to backtesting on historic data. The shorter the time horizon, the shorter the algorithm’s feedback loop will be. And since each cycle through the feedback loop incrementally compounds the algorithm’s performance, shorter feedback loops are better for building defensibility.
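To make the compounding point concrete, here is a toy model with made-up numbers (the one percent gain per cycle is purely illustrative): shorter horizons mean more completed feedback cycles in a year, and the gains multiply.

```python
# Toy model, illustrative numbers only: if each completed feedback
# cycle yields a small relative improvement, an algorithm with a
# shorter prediction horizon compounds far more within the same year.

def compounded_gain(horizon_days: float, gain_per_cycle: float = 0.01,
                    period_days: float = 365.0) -> float:
    cycles = period_days / horizon_days      # feedback cycles completed
    return (1 + gain_per_cycle) ** cycles    # multiplicative compounding

for horizon in (1, 7, 30, 180):
    print(f"{horizon:>3}-day horizon: "
          f"{compounded_gain(horizon):.2f}x baseline after a year")
# A 1-day horizon completes ~180x more cycles than a 180-day one,
# and the compounded gap (roughly 37x vs 1.02x here) is the moat.
```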

“Just right” actionability window

Most algorithms model dynamic systems and return a prediction for a human to act on. Depending on how quickly the system is changing, the algorithm’s output may not remain valid for very long: the prediction may “decay” before the user can take action. In order to be useful to the end user, the algorithm must be designed to accommodate the limitations of computing and human speed.

In a typical AI-human workflow, the human feeds input data into the algorithm, the algorithm runs calculations on that input data and returns an output that predicts a certain outcome or recommends a course of action; the human interprets that information to decide on a course of action, then takes action. The time it takes the algorithm to compute an answer and the time it takes for a human to act on the output are the two largest bottlenecks in this workflow.
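The constraint implied here can be stated in a line of code. Below is a minimal sketch with hypothetical durations: a prediction is only actionable if compute time plus human reaction time fits inside the window before the prediction decays.

```python
# The actionability constraint described above, as a one-line check.
# All durations are hypothetical, in seconds.

def actionable(compute_s: float, human_s: float, decay_s: float) -> bool:
    """True if the prediction is still valid once computed and acted on."""
    return compute_s + human_s < decay_s

print(actionable(compute_s=2, human_s=30, decay_s=3600))  # True: slow-moving system
print(actionable(compute_s=2, human_s=30, decay_s=5))     # False: automate, or the prediction decays first
```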

For most of AI history, slow computational speeds have severely limited the scope of applied AI. An algorithm’s prediction depends on the input data, and the input data represents a snapshot in time at the moment it was recorded. If the environment described by the data changes faster than the algorithm can compute the input data, by the time the algorithm completes its computations and returns a prediction, the prediction will only describe a moment in the past and will not be actionable. For example, the algorithm behind the music app Shazam may have needed several hours to identify a song after first “hearing” it using the computational power of a Windows 95 computer.

The rise of cloud computing and the development of hardware specially optimized for AI computations have dramatically broadened the scope of areas where applied AI is actionable and affordable. While macro tech advancements can greatly advance applied AI, the algorithm is not totally held hostage to current limits of computation; reinforcement through training also can improve the algorithm’s response time. The more examples of the same kind an algorithm encounters, the more computations it can skip on the way to a prediction. Thanks to advances in computation and reinforcement, today Shazam takes less than 15 seconds to identify a song.

Automating the decision and action also could help users make use of predictions that decay too quickly to wait for humans to respond. Opsani is one such company using AI to make decisions that are too numerous and fast-moving for humans to make effectively. Unlike human DevOps, who can only move so fast to optimize performance based on recommendations from an algorithm, Opsani applies AI to both identify and automatically improve operations of applications and cloud infrastructure so its customers can enjoy dramatically better performance.

Not all applications of AI can be completely automated, however, if the perceived risk is too high for end users to accept, or if regulations mandate that humans must approve the decision.

“Just right” performance minimums

Just like software startups launch when they have built a minimum viable product (MVP) in order to collect actionable feedback from initial customers, AI startups should launch when they reach the minimum algorithmic performance (MAP) required by early adopters, so that the algorithm can be trained on more diverse and fresh data sets and avoid becoming overfit to a training set.

Most applications don’t require 100 percent accuracy to be valuable. For example, a fraud detection algorithm may immediately catch only five percent of fraud cases within 24 hours of when they occur, while human fraud investigators catch 15 percent of fraud cases after a month of analysis. In this case, the MAP is zero, because the fraud detection algorithm can serve as a first filter that reduces the number of cases the human investigators must process. The startup can go to market immediately in order to secure access to the large volume of fraud data needed for training its algorithm. Over time, the algorithm’s accuracy will improve and reduce the burden on human investigators, freeing them to focus on the most complex cases.
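Working through the article’s own figures shows why the MAP is zero here; the no-overlap assumption in the sketch below is mine, purely for illustration.

```python
# Using the figures above: even a 5% immediate catch rate is additive,
# because the algorithm acts as a first filter ahead of investigators.
algo_catch_rate  = 0.05   # caught within 24 hours by the algorithm
human_catch_rate = 0.15   # caught after a month by investigators

# Cases the humans no longer need to examine:
reduced_load = algo_catch_rate
# Combined coverage, assuming the two pools don't overlap (a simplification):
combined = algo_catch_rate + human_catch_rate * (1 - algo_catch_rate)
print(f"Investigator load cut by {reduced_load:.0%}; "
      f"combined catch rate ~{combined:.0%}")   # ~19%, vs 15% alone
```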

Startups building algorithms for zero or low MAP applications will be able to launch quickly, but may find themselves constantly looking over their shoulder for copycats that appear before the algorithm has reached a high level of performance.

There’s no one-size-fits-all approach to moving an algorithm from the research lab to the market.

Startups attacking low MAP problems should also watch out for problems that can be solved with near-100 percent accuracy using a very small training set: cases where the problem being modeled is relatively simple, with few dimensions to track and few possible variations in outcome.

AI-powered contract processing is a good example of an application where the algorithm’s performance plateaus quickly. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we have seen algorithms that automatically process these documents reach an acceptable degree of accuracy after training on only a few hundred examples, beyond which additional examples do little to improve the algorithm. That makes it easy for new entrants to match incumbents and earlier entrants in performance.

AIs built for applications where human labor is inexpensive and can easily achieve high accuracy may need to reach a higher MAP before they can find an early adopter. Tasks requiring fine motor skills, for example, have yet to be taken over by robots because human performance sets a very high MAP to overcome. When picking up an object, the AI powering a robotic hand must gauge the object’s stiffness and weight with a high degree of accuracy; otherwise the hand will damage the object being handled. Humans can gauge these dimensions very accurately with almost no training. Startups attacking high MAP problems must invest more time and capital into acquiring enough data to reach MAP and launch.

Threading the needle

Narrow AI can demonstrate impressive gains in a wide range of applications — in the research lab. Building a business around a narrow AI application, on the other hand, requires a new playbook. This process is heavily dependent on the specific use case on all dimensions, and the performance of the algorithm is merely one starting point. There’s no one-size-fits-all approach to moving an algorithm from the research lab to the market, but we hope these ideas will provide a useful blueprint for you to begin.

https://ift.tt/2KRDlS2
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign Revcontent is trying to get rid of misinformation with help from Poynter Institute

CEO John Lemp recently said that thanks to a new policy, publishers in Revcontent’s content recommendation network “won’t ever make a cent” on false and misleading stories — at least, not from the network.

To achieve this, the company is relying on fact-checking provided by the Poynter Institute’s International Fact Checking Network. If any two independent fact checkers from International Fact Checking flag a story from the Revcontent network as false, the company’s widget will be removed, and Revcontent will not pay out any money on that story (not even revenue earned before the story was flagged).
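As described, the policy reduces to a simple rule. The sketch below is a hypothetical encoding of that rule (all names invented), not Revcontent’s actual system.

```python
# Hypothetical encoding of the stated policy, not Revcontent's code:
# two independent IFCN fact-checker flags pull the widget and zero out
# all revenue on the story, including earnings from before the flags.

def review_story(independent_flags: int, accrued_revenue: float) -> dict:
    if independent_flags >= 2:
        return {"widget_active": False, "payout": 0.0}  # forfeit everything
    return {"widget_active": True, "payout": accrued_revenue}

print(review_story(independent_flags=2, accrued_revenue=120.0))
# {'widget_active': False, 'payout': 0.0}
```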

In some ways, Revcontent’s approach to fighting fake news and misinformation sounds similar to the big social media companies — Lemp, like Twitter, has said his company cannot be the “arbiter of truth,” and like Facebook, he’s emphasizing the need to remove the financial incentives for posting sensationalistic-but-misleading stories.

However, Lemp (who’s spoken in the past about using content recommendations to help publishers connect to readers and reduce their reliance on individual platforms) criticized the big Internet companies for “arbitrarily” taking down content in response to “bad PR.” In contrast, he said Revcontent will have a fully transparent approach, one that removes the financial rewards for fake news without silencing anyone.

Lemp didn’t mention any specific takedowns, but the big story these days is Infowars. It seems like nearly everyone has been cracking down on Alex Jones’ far-right, conspiracy-mongering site, removing at least some Infowars-related accounts and content in the past couple weeks.

The Infowars story also raises the question of whether you can effectively fight fake news on a story-by-story basis, rather than completely cutting off publishers when they’ve shown themselves to consistently post misleading or falsified stories.

When asked about this, Lemp said Revcontent also has the option of completely removing publishers from the network, but he said he views that as a “last resort.”

https://ift.tt/2OzHT1u
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign “Unhackable” BitFi crypto wallet has been hacked

The BitFi crypto wallet was supposed to be unhackable and none other than famous weirdo John McAfee claimed that the device – essentially an Android-based mini tablet – would withstand any attack. Spoiler alert: it couldn’t.

First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. McAfee claimed it contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause.

Further, the company offered a bug bounty that seems to be slowly being eroded by outside forces. They asked hackers to pull coins off of a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:

We deposit coins into a Bitfi wallet
If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only)
If you successfully extract the coins and empty the wallet, this would be considered a successful hack
You can then keep the coins and Bitfi will make a payment to you of $250,000
Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure

Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, this did not match the bounty to the letter, claimed BitFi, even though the company never actually shipped any bounty-ready devices.

Something that I feel should be getting more attention is the fact that there is zero evidence that a #bitfi bounty device was ever shipped to a researcher. They literally created an impossible task by refusing to send the device required to satisfy the terms of the engagement.

— Gallagher (@DanielGallagher) August 8, 2018

Then, to add insult to injury, the company earned a Pwnie Award at the security conference Defcon. The award was given for worst vendor response. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that their device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.

Well, that’s a transaction made with a MitMed Bitfi, with the phrase and seed being sent to a remote machine.

That sounds a lot like Bounty 2 to me. pic.twitter.com/qBOVQ1z6P2

— Ask Cybergibbons! (@cybergibbons) August 13, 2018

The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.

The press claiming the BitFi wallet has been hacked. Utter nonsense. The wallet is hacked when someone gets the coins. No-one got any coins. Gaining root access in an attempt to get the coins is not a hack. It’s a failed attempt. All these alleged “hacks” did not get the coins.

— John McAfee (@officialmcafee) August 3, 2018

Unfortunately, the latest hack may have just fulfilled all of BitFi’s requirements. Rashid and Tierney have been able to pull cash out of the wallet by hacking the passphrase, a primary requirement for the bounty. “We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy,” Tierney said. “We believe all conditions have been met.”

The end state of this crypto mess? BitFi did what most hacked crypto companies do: double down on the threats. In a recently deleted Tweet they made it clear that they were not to be messed with:

I haven’t really been following this Bitfi nonsense, but I do so love when companies threaten security researchers. pic.twitter.com/McyBGqM3bt

— Matthew Green (@matthew_d_green) August 6, 2018

The researchers, however, may still have the last laugh.

Claiming your front door has an unpickable lock does not make your house secure. No more does offering a reward only for defeating that front door lock, and repeatedly saying no one has claimed the reward, prove your house is secure, especially when you’ve left the windows open.

— Alan Woodward (@ProfWoodward) August 14, 2018

https://ift.tt/2w7Uphh
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign Bird and Lime are protesting Santa Monica’s electric scooter recommendations

Lime and Bird are protesting recommendations in Santa Monica, Calif. that would prevent the electric scooter companies from operating in the Southern California city. We first saw the news over on Curbed LA, which reported both Lime and Bird are temporarily halting their services in Santa Monica.

Last week, Santa Monica’s shared mobility device selection committee recommended the city move forward with Lyft and Uber-owned Jump as the two exclusive scooter operators in the city during the upcoming 16-month pilot program. The committee ranked Lyft and Jump highest due to their experience in the transportation space, staffing strategy, commitments to diversity and equity, fleet maintenance strategies and other elements. Similarly, the committee recommended both Lyft and Jump as bike-share providers in the city.

Santa Monica!
We’ve taken our fleet offline until 4:30pm locally in order to rally your support in opposition to the council’s recommendation. Don’t let a #LifeWithoutScooters be the future. Help City Hall make the right decision + take action right now: https://t.co/PiuR9pwk4y

— Lime (@limebike) August 14, 2018

Now, both Bird and Lime are asking their respective riders to speak out against the recommendations. Bird, which first launched in Santa Monica, has also emailed riders, asking them to tell the city council that they want Bird to stay.

“In a closed-door meeting, a small city-appointed selection committee decided to recommend banning Bird from your city beginning in September,” Bird wrote in an email. “This group inexplicably scored companies with no experience ever operating shared e-scooters higher than Bird who invented this model right here in Santa Monica.”

Bird goes on to throw shade at Uber and Lyft — neither of which has operated electric scooter services before. That shade is entirely fair, but one could argue that both Uber and Lyft already have deep experience operating transportation services within cities and would be better equipped to run an electric scooter service than a newer company.

Santa Monica Shared Mobility Selection Committee

In addition to asking people to contact their city officials, Bird is hosting a rally later today at Santa Monica City Hall. But given that most of these electric scooters are manufactured by the same provider and that the services are essentially the same, I’d be surprised if there’s much brand loyalty. Over in San Francisco, I personally miss having electric scooters, but I really don’t give a rat’s pajamas which services receive permits. That’s just to say, we’ll see if these efforts are effective.

I’ve reached out to both Lime and Bird and will update this story if I hear back.

https://ift.tt/2vIYoS1
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign Smart speaker sales on pace to increase 50 percent by 2019

It seems Amazon didn’t know what it had on its hands when it released the first Echo in late 2014. The AI-powered speaker formed the foundation of the next big moment in consumer electronics. Those devices have helped mainstream consumer AI and opened the door to wide-scale adoption of connected home products.

New numbers from NPD, naturally, don’t show any sign of flagging for the category. According to the firm, the devices are on pace for 50 percent dollar growth from 2016-2017 to 2018-2019. The category is projected to add $1.6 billion in sales through next year.

The Echo line has grown rapidly over the past four years, with Amazon adding the best-selling Dot and screen-enabled products like the Spot and Show. Google, meanwhile, has been breathing down the company’s neck with its own Home offerings. The company also recently added a trio of “smart displays” designed by LG, Lenovo and JBL.

A new premium category has also arisen, led by Apple’s first entry into the space, the HomePod. Google has similarly offered up the Home Max, and Samsung is set to follow suit with the upcoming Galaxy Home (which more or less looks like a HomePod on a tripod).

As all of the above players were no doubt hoping, smart speaker sales also appear to be driving sales of smart home products, with 19 percent of U.S. consumers planning to purchase one within the next year, according to the firm.

https://ift.tt/2OAyDdv
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators, and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice as wide as the 110-degree field of view of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

In a bit of shade thrown at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree view found in other headsets. I suspect the simulation was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable — though no VR headset really is either — and it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K, but the company declined to specify the actual numbers when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math:

16 million divided by 3 gives around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels across the two displays. Divide 5.3 million by that and you get roughly 1,060 vertical lines. Rounding those off to semi-standard numbers gives us 2560 horizontal pixels per eye and 1080 for the vertical resolution.

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.
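
If you’d like to check that arithmetic yourself, here it is as a few lines of Python. This is just my own back-of-the-envelope sanity check; the pixel counts and the 5K shorthand come from the reasoning above, not from any official StarVR spec sheet.

# My back-of-the-envelope check of StarVR's claims (assumptions, not official specs)
subpixels = 16_000_000                 # claimed sub-pixel count
full_pixels = subpixels / 3            # full RGB pixels: ~5.33 million
horizontal = 5000                      # "5K": roughly 5,000 pixels across both displays
vertical = full_pixels / horizontal    # ~1,067 rows, plausibly 1080 in practice
per_eye = horizontal / 2               # ~2,500 per eye, plausibly 2560 in practice

print(f"{full_pixels:,.0f} full pixels, ~{per_eye:.0f} x {vertical:.0f} per eye")
print(210 / 110)                       # ~1.9, so "nearly twice as wide" checks out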

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately, the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car), so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including “foveated rendering,” which focuses rendering power where you’re looking. That too wasn’t being shown, however, and it strikes me as likely phenomenally difficult to pull off well, so it may be a while before we see a good demo of it.
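
To make the idea concrete, here is a minimal sketch of the foveated rendering concept: pick a resolution scale for each screen tile based on its distance from the gaze point. This is purely my own illustration; neither StarVR nor Tobii has published how they implement it, and the function name and thresholds are invented.

import math

def render_scale(tile_center, gaze_point, fovea_radius=0.1, falloff=0.3):
    """Return a 0-to-1 resolution scale for a screen tile, given the
    tile center and gaze point in normalized screen coordinates.
    (Hypothetical helper; the radii are made-up thresholds.)"""
    dist = math.hypot(tile_center[0] - gaze_point[0],
                      tile_center[1] - gaze_point[1])
    if dist <= fovea_radius:
        return 1.0                     # full detail where the eye is looking
    # Linear falloff from full to quarter resolution in the periphery
    t = min((dist - fovea_radius) / falloff, 1.0)
    return 1.0 - 0.75 * t

print(render_scale((0.5, 0.5), (0.5, 0.5)))   # 1.0 at the gaze point
print(render_scale((0.9, 0.9), (0.5, 0.5)))   # 0.25 out in the periphery

The point is simply that most of the frame can be rendered cheaply as long as the tracker knows where your fovea is pointed.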

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important for rendering the image correctly. One less thing to worry about.
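
The measurement itself is conceptually simple once both pupils can be tracked: it’s just the distance between them. A sketch, assuming a hypothetical tracker that reports each pupil’s 3D position in millimeters (the real Tobii API will look different):

import math

def measure_ipd(left_pupil, right_pupil):
    """Estimate interpupillary distance from two (x, y, z) pupil
    positions in millimeters. Hypothetical data format; a real
    tracker would average over many noisy samples."""
    return math.dist(left_pupil, right_pupil)

# Typical adult IPDs run roughly 54-74 mm
print(f"{measure_ipd((-31.5, 0.0, 12.0), (31.5, 0.2, 11.8)):.1f} mm")  # ~63.0 mm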

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — an option aimed at the commercial providers who need it.

Although this headset will be going to high-end commercial types, you can bet that its wide FOV and eye tracking will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after experiencing this one. VR still has a long way to go to convince me it’s worthwhile, but major improvements like these definitely help.

https://ift.tt/2MLrt5O https://ift.tt/1P9I4xH
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign Cytera Cellworks aims to bring cell culture automation to your dinner plate

Cytera Cellworks hopes to revolutionize the so-called ‘clean meat’ industry through the automation of cell cultures — and if all goes to plan, the company’s products could one day be in every grocery store in America.

Cytera is a ways off from that happening, though. Founded in 2017 by two college students in the U.K., Ignacio Willats and Ali Afshar, Cytera uses robotic automation to configure the cell cultures used in things like growing turkey meat in a petri dish or testing stem cells.

The two founders — Willats the events-and-startups guy, Afshar the scientist — like to do things differently in the lab as well, strapping GoPros to lab workers’ heads, for instance. The two came together at Imperial College London to run an event about lab automation, and from there formed their friendship and their company.

“At the time, lab automation felt suboptimal,” Afshar told TechCrunch, further explaining he wanted to do something with a higher impact.

Cellular agriculture, or growing animal cells in a lab, seems to hit that button, and the two are currently enrolled in Y Combinator’s Summer 2018 cohort to help them get to the next step.

There’s been an explosion in the lab-made meat industry, which relies on taking a biopsy of animal cells and growing them in a lab rather than getting meat from an actual living, breathing animal. In just the last couple of years, startups like Memphis Meats have popped up, offering lab meat to restaurants. Even Hampton Creek (now called Just), the company known for its vegan mayo products, is creating lab-grown foie gras.

Originally, the company was going to pursue general lab automation, but there was enough interest from clients and potential business in cell culture automation alone that they changed the name for clarity. Cytera already has some promising prospects, too, including a leading gene therapy company the two couldn’t name just yet.

Of course, automation in the lab is nothing new, and big pharma has already poured billions into it for drug discovery. One could imagine a giant pharma company teaming up with a meat company looking to get into lab-made meat and doing something similar, but so far Willats and Afshar say they haven’t really seen that happening. They say bigger companies are much more likely to partner with smaller startups like theirs to get the job done.

Obviously, there are trade-offs at either end. But should Cytera make it, you may one day find yourself eating a chicken breast grown by a company that bought cells cultured in a Cytera-automated lab.

https://ift.tt/2Pazl26 https://ift.tt/1P9I4xH
via IFTTT

Posted in: Uncategorized


August 14, 2018

from Pradodesign A Friendly Octopus Found Within Ancient River Pebble Mosaics in Greece

Photos: Ephorate of Antiquities of Arta

Pebble mosaics dating from the 4th century BC have been unearthed in Greece. During excavations at the Small Theatre of Ancient Amvrakia, the floor of a 12-foot-wide bathhouse was revealed. Archaeologists discovered carefully laid mosaics of swans, octopuses, and winged cherubic figures surrounded by a spiral border. Each design was formed from smooth river pebbles in white, off-white, and dark tones, with amber and red pebbles acting as accents. The dig was conducted by the Ephorate of Antiquities in the town of Arta, which has been occupied on and off since ancient times.

According to Archeology News Network, “the pebble floor is linked with a similar one located in an earlier excavation in the 70s and partly covered by the east part of the Small Theatre’s koilon/auditorium. This pebble floor had been removed from the site during the 1976 excavations. It depicts similar scenes with flying cupids, swans and dolphins and at present is in the storerooms of the Archaeological Museum of Arta.” (via The History Blog)

https://ift.tt/2vKQyap https://ift.tt/1P9I4xH
via IFTTT

Posted in: Uncategorized