Saturday, December 16, 2017

MicroRobots Act Like Insects

Engineers Program Tiny Robots
to Move, Think like Insects
By Syl Kacapyr, Cornell University

While engineers have had success building tiny, insect-like robots, programming them to behave autonomously like real insects continues to present technical challenges. A group of Cornell engineers has been experimenting with a new type of programming that mimics the way an insect’s brain works, which could soon have people wondering if that fly on the wall is actually a fly.

RoboBees manufactured by the Harvard Microrobotics Lab have a 3-centimeter wingspan and weigh only 80 milligrams. Cornell engineers are developing new programming that will make them more autonomous and adaptable to complex environments.

For a robot to sense a gust of wind using tiny hair-like metal probes embedded in its wings, adjust its flight accordingly, and plan its path as it attempts to land on a swaying flower, it would need to carry a desktop-size computer on its back to supply the processing power. Silvia Ferrari, professor of mechanical and aerospace engineering and director of the Laboratory for Intelligent Systems and Controls, sees the emergence of neuromorphic computer chips as a way to shrink a robot’s payload.

Unlike traditional chips that process combinations of 0s and 1s as binary code, neuromorphic chips process spikes of electrical current that fire in complex combinations, similar to how neurons fire inside a brain. Ferrari’s lab is developing a new class of “event-based” sensing and control algorithms that mimic neural activity and can be implemented on neuromorphic chips. Because the chips require significantly less power than traditional processors, they allow engineers to pack more computation into the same payload.
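
To make the idea concrete, here is a minimal, purely illustrative Python sketch of event-based control built on a leaky integrate-and-fire neuron, the basic unit that neuromorphic hardware implements. None of the names or constants below come from the Cornell lab; they are assumptions for the example.

    import numpy as np

    def lif_step(v, input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        """Advance a leaky integrate-and-fire neuron one step; return (v, spiked)."""
        v += (-v + input_current) * (dt / tau)  # leak toward rest, driven by input
        if v >= v_thresh:                       # threshold crossed: emit a spike
            return v_reset, True
        return v, False

    # Event-based control: work happens only when a spike (an event) arrives,
    # rather than on every tick of a fixed-rate control loop.
    rng = np.random.default_rng(0)
    v, correction = 0.0, 0.0
    for step in range(1000):
        gust = max(rng.normal(0.2, 0.5), 0.0)   # noisy airflow-sensor reading
        v, spiked = lif_step(v, gust)
        if spiked:
            correction += 0.1                   # e.g., nudge wing-stroke amplitude
    print(f"controller issued corrections totaling {correction:.1f}")

The sparse computation is the point: between spikes the controller does nothing, which is what lets neuromorphic chips run on a fraction of the power of a conventional processor.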

Ferrari’s lab has teamed up with the Harvard Microrobotics Laboratory, which has developed an 80-milligram flying RoboBee outfitted with a number of vision, optical flow and motion sensors. While the robot currently remains tethered to a power source, Harvard researchers are working on eliminating the restraint with the development of new power sources. The Cornell algorithms will help make RoboBee more autonomous and adaptable to complex environments without significantly increasing its weight.

“Getting hit by a wind gust or a swinging door would cause these small robots to lose control. We’re developing sensors and algorithms to allow RoboBee to avoid the crash, or if crashing, survive and still fly,” said Ferrari. “You can’t really rely on prior modeling of the robot to do this, so we want to develop learning controllers that can adapt to any situation.”

To speed development of the event-based algorithms, a virtual simulator was created by Taylor Clawson, a doctoral student in Ferrari’s lab. The physics-based simulator models the RoboBee and the instantaneous aerodynamic forces it faces during each wing stroke. As a result, the model can accurately predict RoboBee’s motions during flights through complex environments.
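
As a rough illustration of the kind of quantity such a simulator computes on each wing stroke, here is a toy quasi-steady lift model. Every constant is an assumption chosen to be loosely RoboBee-scale; the lab's simulator models the instantaneous aerodynamics far more faithfully.

    import math

    RHO_AIR = 1.2          # kg/m^3, sea-level air density
    WING_AREA = 1.5e-5     # m^2, assumed wing area at RoboBee scale
    C_LIFT = 1.8           # assumed effective lift coefficient
    STROKE_ARM = 0.01      # m, assumed effective stroke radius

    def stroke_lift(t, freq=120.0, amplitude=1.0):
        """Instantaneous quasi-steady lift (N) for a sinusoidal stroke angle."""
        omega = 2 * math.pi * freq
        tip_speed = amplitude * omega * math.cos(omega * t) * STROKE_ARM
        return 0.5 * RHO_AIR * C_LIFT * WING_AREA * tip_speed ** 2

    # Average the instantaneous lift over one stroke period.
    period = 1.0 / 120.0
    samples = [stroke_lift(i * period / 200) for i in range(200)]
    print(f"mean lift per wing: {1e3 * sum(samples) / len(samples):.2f} mN")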

“The simulation is used both in testing the algorithms and in designing them,” said Clawson, who has successfully developed an autonomous flight controller for the robot using biologically inspired programming that functions as a neural network. “This network is capable of learning in real time to account for irregularities in the robot introduced during manufacturing, which make the robot significantly more challenging to control.”

Aside from greater autonomy and resiliency, Ferrari said her lab plans to help outfit RoboBee with new micro devices such as a camera, expanded antennae for tactile feedback, contact sensors on the robot’s feet and airflow sensors that look like tiny hairs.

“We’re using RoboBee as a benchmark robot because it’s so challenging, but we think other robots that are already untethered would greatly benefit from this development because they have the same issues in terms of power,” said Ferrari.

One robot that is already benefiting is the Harvard Ambulatory Microrobot, a four-legged machine just 17 millimeters long and weighing less than 3 grams. It can scamper at a speed of 0.44 meters per second, but Ferrari’s lab is developing event-based algorithms that will help complement the robot’s speed with agility.

Ferrari is continuing the work using a four-year, $1 million grant from the Office of Naval Research. She’s also collaborating with leading research groups at a number of universities that are fabricating neuromorphic chips and sensors.


Friday, December 15, 2017

Doing Without "Dark Energy"

Mathematicians propose alternative explanation for cosmic acceleration
By Andy Fell

December 13, 2017 -- Three mathematicians have a different explanation for the accelerating expansion of the universe that does without theories of “dark energy.” Einstein’s original equations for General Relativity actually predict cosmic acceleration due to an “instability,” they argue in a paper published recently in Proceedings of the Royal Society A.

About 20 years ago, astronomers made a startling discovery: Not only is the universe expanding — as had been known for decades — but the expansion is speeding up. To explain this, cosmologists have invoked a mysterious force called “dark energy” that serves to push space apart.

Shortly after Albert Einstein wrote his equations for General Relativity, which describe gravity, he included an “antigravity” factor called the “cosmological constant” to balance gravitational attraction and produce a static universe. But Einstein later called the cosmological constant his greatest mistake.

When modern cosmologists began to tackle cosmic acceleration and dark energy, they dusted off Einstein’s cosmological constant, treating it as interchangeable with dark energy in light of the new knowledge about cosmic acceleration.

That explanation didn’t satisfy mathematicians Blake Temple and Zeke Vogler at the University of California, Davis, and Joel Smoller at the University of Michigan, Ann Arbor.

“We set out to find the best explanation we could come up with for the anomalous acceleration of the galaxies within Einstein’s original theory without dark energy,” Temple said.

The original theory of General Relativity has given correct predictions in every other context, Temple said, and there is no direct evidence of dark energy. So why add a “fudge factor” (dark energy or the cosmological constant) to equations that already appear correct? Rather than treating the equations as faulty and tweaking them to get the right solution, the mathematicians argue that the equations are correct but that the assumption of a uniformly expanding universe of galaxies is wrong, with or without dark energy, because that configuration is unstable.

An unstable solution

Cosmological models start from a “Friedmann universe,” which assumes that all matter is expanding but evenly distributed in space at any given time, Temple said.

Temple, Smoller and Vogler worked out solutions to General Relativity without invoking dark energy. They argue that the equations show that the Friedmann space-time is actually unstable: Any perturbation — for example if the density of matter is a bit lower than average — pushes it over into an accelerating universe.
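
For readers who want the equations: the scale factor a(t) of a Friedmann universe obeys the standard Friedmann equations, written here in LaTeX without a cosmological constant, matching the paper's starting point:

    \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2},
    \qquad
    \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p).

For ordinary matter ρ + 3p > 0, so the second equation gives a decelerating expansion. The authors' point is that this uniform solution is unstable, so a local under-density can evolve toward a space-time whose expansion accelerates locally, with no Λ term added.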

Temple compares this to an upside-down pendulum. When a pendulum is hanging down, it is stable at its lowest point. Turn a rigid pendulum the other way, and it can balance if it is exactly centered — but any small gust will knock it over.
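
The analogy can be made quantitative. Linearizing the pendulum equation about each equilibrium gives

    \ddot\theta = -\frac{g}{L}\,\theta \quad \text{(hanging: oscillation, stable)},
    \qquad
    \ddot\theta = +\frac{g}{L}\,\theta \quad \text{(inverted: } \theta \propto e^{t\sqrt{g/L}} \text{, unstable)},

so any perturbation of the inverted pendulum grows exponentially, just as, the authors argue, any perturbation of the Friedmann space-time does.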

This tells us that we should not expect to measure a Friedmann universe, because it is unstable, Temple said. What we should expect to measure instead are local space-times in which the expansion accelerates. Remarkably, the local space-times created by the instability exhibit precisely the same range of cosmic accelerations as theories of dark energy, he said.

What this shows is that the acceleration of the galaxies could have been predicted from the original theory of General Relativity without invoking the cosmological constant/dark energy at all, Temple said.

“The math isn’t controversial, the instability isn’t controversial,” Temple said. “What we don’t know is, does our Milky Way galaxy lie near the center of a large under-density of matter in the universe?”

The paper does include testable predictions that distinguish their model from dark energy models, Temple said.

Joel Smoller died in September 2017, while the paper was under review. The work was partially supported by the National Science Foundation.  



Thursday, December 14, 2017

Hydrogen-Boron Laser Fusion

Laser-Boron Fusion now a
‘Leading Contender’ for Energy
A laser-driven technique for creating fusion that dispenses with the need for radioactive fuel elements and leaves no toxic radioactive waste is now within reach, says a UNSW physicist.
By Wilson da Silva, University of New South Wales

December 14, 2017 -- Dramatic advances in powerful, high-intensity lasers are making it viable for scientists to pursue what was once thought impossible: creating fusion energy based on hydrogen-boron reactions. And an Australian physicist is in the lead, armed with a patented design and working with international collaborators on the remaining scientific challenges.

In a paper in the scientific journal Laser and Particle Beams, lead author Heinrich Hora from UNSW Sydney and international colleagues argue that the path to hydrogen-boron fusion is now viable, and may be closer to realisation than other approaches, such as the deuterium-tritium fusion approach being pursued by the US National Ignition Facility (NIF) and the International Thermonuclear Experimental Reactor (ITER) under construction in France.

 “I think this puts our approach ahead of all other fusion energy technologies,” said Hora, who predicted in the 1970s that fusing hydrogen and boron might be possible without the need for thermal equilibrium.

Rather than heating fuel to the temperature of the Sun using massive, high-strength magnets to control superhot plasmas inside a doughnut-shaped toroidal chamber (as at ITER), hydrogen-boron fusion is achieved using two powerful lasers in rapid bursts, which apply precise non-linear forces to compress the nuclei together.

Hydrogen-boron fusion produces no neutrons and, therefore, no radioactivity in its primary reaction. And unlike most other sources of power production – like coal, gas and nuclear, which rely on heating liquids like water to drive turbines – the energy generated by hydrogen-boron fusion converts directly into electricity.
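
For reference, the primary reaction is standard textbook proton-boron fusion, which yields three charged alpha particles and roughly 8.7 MeV, with no neutrons:

    p + {}^{11}\mathrm{B} \;\longrightarrow\; 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}

Because the energy emerges as the kinetic energy of charged particles, it can in principle be collected directly as electricity rather than through a heat cycle.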

But the downside has always been that hydrogen-boron fusion requires much higher temperatures and densities – almost 3 billion degrees Celsius, or 200 times hotter than the core of the Sun.

However, dramatic advances in laser technology are close to making the two-laser approach feasible, and a spate of recent experiments around the world indicate that an ‘avalanche’ fusion reaction could be triggered in the trillionth-of-a-second blast from a petawatt-scale laser pulse, whose fleeting bursts pack a quadrillion watts of power. If scientists could exploit this avalanche, Hora said, a breakthrough in proton-boron fusion was imminent.

“It is a most exciting thing to see these reactions confirmed in recent experiments and simulations,” said Hora, an Emeritus Professor of Theoretical Physics at UNSW. “Not just because it proves some of my earlier theoretical work, but they have also measured the laser-initiated chain reaction to create one billion-fold higher energy output than predicted under thermal equilibrium conditions.”

Together with 10 colleagues in six nations – including from Israel’s Soreq Nuclear Research Centre and the University of California, Berkeley – Hora describes a roadmap for the development of hydrogen-boron fusion based on his design, bringing together recent breakthroughs and detailing what further research is needed to make the reactor a reality.

An Australian spin-off company, HB11 Energy, holds the patents for Hora’s process. “If the next few years of research don’t uncover any major engineering hurdles, we could have a prototype reactor within a decade,” said Warren McKenzie, managing director of HB11.

“From an engineering perspective, our approach will be a much simpler project because the fuels and waste are safe, the reactor won’t need a heat exchanger and steam turbine generator, and the lasers we need can be bought off the shelf,” he added.

Other researchers involved in the study were Shalom Eliezer of Israel’s Soreq Nuclear Research Centre; Jose M. Martinez-Val from Spain’s Polytechnic University of Madrid; Noaz Nissim from the University of California, Berkeley; Jiaxiang Wang of East China Normal University; Paraskevas Lalousis of Greece’s Institute of Electronic Structure and Laser; and George Miley at the University of Illinois, Urbana.

Wednesday, December 13, 2017

Ticks Fed From Feathered Dinosaurs

Dinosaur Parasites Trapped in 100-Million-Year-Old Amber Tell Blood-Sucking Story
Amber containing tick grasping a dinosaur feather is first direct fossil evidence of ticks parasitizing dinosaurs
University of Oxford, December 12, 2017 --
  • Fossil discovery shows ticks sucked the blood of feathered dinosaurs almost 100 million years ago
  • Amber containing tick grasping a dinosaur feather is first direct fossil evidence of ticks parasitising dinosaurs
  • New scientific paper also describes new species, Deinocroton draculi or "Dracula's terrible tick", showing further evidence of tick-dinosaur relationship
Fossilised ticks discovered trapped and preserved in amber show that these parasites sucked the blood of feathered dinosaurs almost 100 million years ago, according to a new article published in Nature Communications today.

Sealed inside a piece of 99 million-year-old Burmese amber, researchers found a so-called hard tick grasping a feather. The discovery is remarkable because fossils of parasitic, blood-feeding creatures directly associated with remains of their host are exceedingly scarce, and the new specimen is the oldest known to date.

The scenario may echo the famous mosquito-in-amber premise of Jurassic Park, although the newly-discovered tick dates from the Cretaceous period (145-66 million years ago) and will not be yielding any dinosaur-building DNA: all attempts to extract DNA from amber specimens have proven unsuccessful due to the short life of this complex molecule.

"Ticks are infamous blood-sucking, parasitic organisms, having a tremendous impact on the health of humans, livestock, pets, and even wildlife, but until now clear evidence of their role in deep time has been lacking," says Enrique Peñalver from the Spanish Geological Survey (IGME) and leading author of the work.

Cretaceous amber provides a window into the world of the feathered dinosaurs, some of which evolved into modern-day birds. The studied amber feather with the grasping tick is similar in structure to modern-day bird feathers, and it offers the first direct evidence of an early parasite-host relationship between ticks and feathered dinosaurs.

"The fossil record tells us that feathers like the one we have studied were already present on a wide range of theropod dinosaurs, a group which included ground-running forms without flying ability, as well as bird-like dinosaurs capable of powered flight," explains Dr Ricardo Pérez-de la Fuente, a research fellow at Oxford University Museum of Natural History and one of the authors of the study.

"So although we can't be sure what kind of dinosaur the tick was feeding on, the mid-Cretaceous age of the Burmese amber confirms that the feather certainly did not belong to a modern bird, as these appeared much later in theropod evolution according to current fossil and molecular evidence".

The researchers found further, indirect evidence of ticks parasitising dinosaurs in Deinocroton draculi, or "Dracula's terrible tick", belonging to a newly-described extinct group of ticks. This new species was also found sealed inside Burmese amber, with one specimen remarkably engorged with blood, increasing its volume approximately eight times over non-engorged forms. Despite this, it has not been possible to directly determine its host animal.

"Assessing the composition of the blood meal inside the bloated tick is not feasible because, unfortunately, the tick did not become fully immersed in resin and so its contents were altered by mineral deposition," explains Dr Xavier Delclòs, an author of the study from the University of Barcelona and IRBio.

But indirect evidence of the likely host for these novel ticks was found in the form of hair-like structures, or setae, from the larvae of skin beetles (dermestids), found attached to two Deinocroton ticks preserved together. Today, skin beetles feed in nests, consuming feathers, skin and hair from the nest's occupants. And as no mammal hairs have yet been found in Cretaceous amber, the presence of skin beetle setae on the two Deinocroton draculi specimens suggests that the ticks' host was a feathered dinosaur.

"The simultaneous entrapment of two external parasites - the ticks - is extraordinary, and can be best explained if they had a nest-inhabiting ecology as some modern ticks do, living in the host's nest or in their own nest nearby," says Dr David Grimaldi of the American Museum of Natural History and an author of the work.

Together, these findings provide direct and indirect evidence that ticks have been parasitising and sucking blood from dinosaurs within the evolutionary lineage leading to modern birds for almost 100 million years. While the birds were the only lineage of theropod dinosaurs to survive the mass extinction at the end of the Cretaceous 66 million years ago, the ticks did not just cling on for survival, they continued to thrive.

Tuesday, December 12, 2017

Why Meteoroids Blow Up

Research Shows Why Meteoroids
Explode before they Reach Earth
By Kayla Zacharias, Purdue University

WEST LAFAYETTE, Ind. – December 11, 2017 -- Our atmosphere is a better shield from meteoroids than researchers thought, according to a new paper published in Meteoritics & Planetary Science.

When a meteor comes hurtling toward Earth, the high-pressure air in front of it seeps into its pores and cracks, pushing the body of the meteor apart and causing it to explode.

“There’s a big gradient between high-pressure air in front of the meteor and the vacuum of air behind it,” said Jay Melosh, a professor of Earth, Atmospheric and Planetary Sciences at Purdue University and co-author of the paper. “If the air can move through the passages in the meteorite, it can easily get inside and blow off pieces.”
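
A rough order-of-magnitude check, using illustrative numbers not taken from the paper: ram pressure scales as ρv². At Chelyabinsk's entry speed of roughly 19 km/s and an air density near 30 km altitude of about 0.02 kg/m³,

    p_{\mathrm{ram}} \sim \rho v^{2} \approx 0.02 \times (1.9 \times 10^{4})^{2} \approx 7 \times 10^{6}\ \mathrm{Pa},

several megapascals pressing on a body with near-vacuum behind it, comparable to plausible strengths of weak, fractured stone.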

Researchers knew that meteoroids often blew up before they reached the Earth’s surface, but they didn’t know why. Melosh’s team looked to the 2013 Chelyabinsk event, when a meteoroid exploded over Chelyabinsk, Russia, to explain the phenomenon.

The explosion came as a surprise and released energy comparable to that of a small nuclear weapon. When it entered Earth’s atmosphere, the meteoroid created a bright fireball. Minutes later, a shock wave blasted out nearby windows, injuring hundreds of people.

The meteoroid weighed around 10,000 tons, but only about 2,000 tons of debris were recovered, which meant something happened in the upper atmosphere that caused it to disintegrate. To solve the puzzle, the researchers used a unique computer code that allows both solid material from the meteor body and air to exist in any part of the calculation.

“I’ve been looking for something like this for a while,” Melosh said. “Most of the computer codes we use for simulating impacts can tolerate multiple materials in a cell, but they average everything together, so different materials in a cell lose their individual identity, which is not appropriate for this kind of calculation.”

This new code allowed the researchers to push air into the meteoroid and let it percolate, which lowered the strength of the meteoroid significantly, even if it had been moderately strong to begin with.
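
A toy sketch, in no way the authors' code, of why percolating air weakens the body: in an effective-stress picture, pore pressure offsets the confining stress that holds cracks closed. All numbers are assumptions for illustration.

    # Toy effective-stress illustration (assumed numbers, not from the paper).

    def effective_strength(cohesion, confining_stress, pore_pressure):
        """Crude effective strength: pore pressure counteracts confinement."""
        return max(cohesion + confining_stress - pore_pressure, 0.0)

    ram = 7e6       # Pa, assumed ram pressure on the front face
    cohesion = 1e6  # Pa, assumed cohesion of weak, fractured stone

    # Impermeable body: pores stay near vacuum, so ram pressure confines it.
    print(effective_strength(cohesion, confining_stress=ram, pore_pressure=0.0))
    # Permeable body: air percolates in, pressurizing the pores from within.
    print(effective_strength(cohesion, confining_stress=ram, pore_pressure=ram))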

While this mechanism may protect Earth’s inhabitants from small meteoroids, large ones likely won’t be bothered by it, he said. Iron meteoroids are much smaller and denser, and even relatively small ones tend to reach the surface.

Monday, December 11, 2017

A New State of Matter

Physicists Excited by Discovery of
New Form of Matter, Excitonium
Abbamonte group achieves first-ever measurement of excitonium collective modes and first observation of soft plasmon in any material
By Siv Schwink, University of Illinois

Excitonium has a team of researchers at the University of Illinois at Urbana-Champaign… well… excited! Professor of Physics Peter Abbamonte and graduate students Anshul Kogar and Mindy Rak, with input from colleagues at Illinois, University of California, Berkeley, and University of Amsterdam, have proven the existence of this enigmatic new form of matter, which has perplexed scientists since it was first theorized almost 50 years ago.

The team studied non-doped crystals of the oft-analyzed transition metal dichalcogenide titanium diselenide (1T-TiSe2) and reproduced their surprising results five times on different cleaved crystals. University of Amsterdam Professor of Physics Jasper van Wezel provided crucial theoretical interpretation of the experimental results.

So what exactly is excitonium?

Excitonium is a condensate—it exhibits macroscopic quantum phenomena, like a superconductor, or superfluid, or insulating electronic crystal. It’s made up of excitons, particles that are formed in a very strange quantum mechanical pairing, namely that of an escaped electron and the hole it left behind.

It defies reason, but it turns out that when an electron, seated at the edge of a crowded-with-electrons valence band in a semiconductor, gets excited and jumps over the energy gap to the otherwise empty conduction band, it leaves behind a “hole” in the valence band. That hole behaves as though it were a particle with positive charge, and it attracts the escaped electron. When the escaped electron, with its negative charge, pairs up with the hole, the two remarkably form a composite particle, a boson—an exciton.
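
The bound pair is essentially a hydrogen atom writ small. The standard Wannier-exciton energy levels, a general semiconductor textbook result rather than anything specific to this experiment, are

    E_n = -\frac{\mu e^{4}}{2 \hbar^{2} \varepsilon^{2} n^{2}}
        = -\frac{\mu / m_e}{\varepsilon^{2}} \, \frac{13.6\ \mathrm{eV}}{n^{2}},
    \qquad n = 1, 2, \ldots

where μ is the electron-hole reduced mass and ε the dielectric constant of the crystal. For typical values (μ ≈ 0.1 mₑ, ε ≈ 10) the binding energy is of order 10 meV, far weaker than hydrogen's 13.6 eV.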

In point of fact, the hole’s particle-like attributes are attributable to the collective behavior of the surrounding crowd of electrons. But that understanding makes the pairing no less strange and wonderful.

Why has excitonium taken 50 years to be discovered in real materials?

Until now, scientists have not had the experimental tools to positively distinguish whether what looked like excitonium wasn’t in fact a Peierls phase. Though the Peierls mechanism is completely unrelated to exciton formation, Peierls phases and exciton condensation share the same symmetry and similar observables—a superlattice and the opening of a single-particle energy gap.

Abbamonte and his team were able to overcome that challenge by using a novel technique they developed called momentum-resolved electron energy-loss spectroscopy (M-EELS). M-EELS is more sensitive to valence band excitations than inelastic x-ray or neutron scattering techniques. Kogar retrofitted an EEL spectrometer, which on its own could measure only the trajectory of an electron, giving how much energy and momentum it lost, with a goniometer, which allows the team to measure very precisely an electron’s momentum in real space.

With their new technique, the group was able for the first time to measure collective excitations of the low-energy bosonic particles, the paired electrons and holes, regardless of their momentum. More specifically, the team achieved the first-ever observation in any material of the precursor to exciton condensation, a soft plasmon phase that emerged as the material approached its critical temperature of 190 Kelvin. This soft plasmon phase is “smoking gun” proof of exciton condensation in a three-dimensional solid and the first-ever definitive evidence for the discovery of excitonium.

“This result is of cosmic significance,” affirms Abbamonte. “Ever since the term ‘excitonium’ was coined in the 1960s by Harvard theoretical physicist Bert Halperin, physicists have sought to demonstrate its existence. Theorists have debated whether it would be an insulator, a perfect conductor, or a superfluid—with some convincing arguments on all sides. Since the 1970s, many experimentalists have published evidence of the existence of excitonium, but their findings weren’t definitive proof and could equally have been explained by a conventional structural phase transition.”

Rak recalls the moment, working in the Abbamonte laboratory, when she first understood the magnitude of these findings: “I remember Anshul being very excited about the results of our first measurements on TiSe2. We were standing at a whiteboard in the lab as he explained to me that we had just measured something that no one had seen before: a soft plasmon.”

“The excitement generated by this discovery remained with us throughout the entire project,” she continues. “The work we did on TiSe2 allowed me to see the unique promise our M-EELS technique holds for advancing our knowledge of the physical properties of materials and has motivated my continued research on TiSe2.”

Kogar admits that discovering excitonium was not the original motivation for the research—the team had set out to test their new M-EELS method on a crystal that was readily available—grown at Illinois by former graduate student Young Il Joe, now of NIST. But, he emphasizes, it is no coincidence that excitonium was a major interest:

“This discovery was serendipitous. But Peter and I had had a conversation about 5 or 6 years ago addressing exactly this topic of the soft electronic mode, though in a different context, the Wigner crystal instability. So although we didn't immediately get at why it was occurring in TiSe2, we did know that it was an important result—and one that had been brewing in our minds for a few years."

The team’s findings are published in the December 8, 2017 issue of the journal Science in the article, “Signatures of exciton condensation in a transition metal dichalcogenide.”

This fundamental research holds great promise for unlocking further quantum mechanical mysteries: after all, the study of macroscopic quantum phenomena is what has shaped our understanding of quantum mechanics. It could also shed light on the metal-insulator transition in band solids, in which exciton condensation is believed to play a part. Beyond that, possible technological applications of excitonium are purely speculative.

Sunday, December 10, 2017

Basics of a "Store Brand"

Store brands are a line of products strategically branded by a retailer under a single brand identity. The concept is similar to that of house brands: private label brands (PLBs) in the United States, own brands in the UK, home brands in Australia, and generic brands. Store brands are distinct in that they are managed solely by the retailer for sale in a specific chain of stores. The retailer designs the manufacturing, packaging and marketing of the goods in order to build on the relationship between the products and the store's customer base. Store-brand goods are generally cheaper than national-brand goods, because the retailer can optimize production to suit consumer demand and reduce advertising costs. Goods sold under a store brand are subject to the same regulatory oversight as goods sold under a national brand. Consumer demand for store brands may be related to individual characteristics such as demographics and socioeconomic variables.


How Store Brands Relate to Customers

A store brand is a way of relating to different customers. Through its choice of branding, a retailer can build a relationship with particular groups of consumers, distinguished by characteristics such as demographics.

Store Brand versus National Brand

The store brand is the only brand for which the retailer bears full responsibility and control, from development, sourcing and warehousing to merchandising and marketing, whereas for national brands retailers leave most of these decisions to the manufacturer. A store brand therefore matters more to the retailer, since the retailer plays the decisive role in the success or failure of its own label. This information is based on data from 34 food categories at 106 major supermarket chains operating in the 50 largest retail markets in the U.S. (Dhar, S. K., & Hoch, S. J. 1997). Although national brands have long dominated the retail scene, retailers generally use them to draw customers to their stores. Recently, department stores, supermarkets, service stations, clothiers and chemists have begun introducing more store brands. Studies show that consumers are buying more and more store brands and don’t plan on returning to national brands anytime soon (Kotler et al. 2013). Store brands are generally cheaper than national brands, which, with consumers becoming more price-conscious and less brand-conscious, has increased store brand sales (Kotler et al. 2013). Some marketers have predicted that store brands will eventually knock out all but the strongest national brands (Kotler et al. 2013). Store brands tend to generate higher margins than national brands. Store brands were once known as low-price, low-quality brands, but they are now positioned as value brands, aiming for quality equivalent to manufacturer brands at lower prices.

Quality

The quality of a store brand product is not necessarily inferior to that of national brand products; the store brand simply carries lower research, development and advertising costs than national brands incur.

The cost of a store brand product is estimated to be 25% less than that of national brands (Business Insider, 2014). In one study comparing store brand and national brand products, 33 of the 57 store brand food items tested tasted as good as or better than the national brand products (Weisbaum, H. 2013). Todd Marks, a senior editor at Consumer Reports, stated that products may be equal in quality but have different flavour profiles based on ingredients or recipes.

Examples of taste differences between store brand and national brand products include Walmart’s Great Value vanilla ice cream being rated almost equal to the Breyers variant, all seven store-brand cashews rating better than Emerald cashews, and Trader Joe’s mixed vegetables being rated crisper and fresher than Birds Eye. Results like these have the potential to spur a big change in customers’ grocery shopping habits and what they purchase.

Advantages of Private Branding

  • Private labels offer retailers control over product factors such as size, package design, production and distribution.
  • Store brands can gain market share from national brands.
  • Logos and taglines can be customised to the customer’s shopping experience.
  • Store labels can shape shoppers’ in-store experience.
  • Retailers have more control over decisions about store brand products.
  • Category gaps that haven’t been filled by national brands can be filled.
  • National brands face competition and need to be innovative.
  • Store brands have increased revenue at the local and regional level, contributing to an overall positive economic outlook.
  • Product images are enhanced by the increased competition.
  • Marketing plans can be specialised for a store brand.
  • Store brand quality is increasing due to higher demand.
  • The greater the store brand usage, the stronger the direct effect of store brand value for money on store brand loyalty.

                              https://en.wikipedia.org/wiki/Store_brand