Long Short Term Memory (LSTM): Digging a Bit Deeper


This article was published as a part of the Data Science Blogathon.

Introduction to Long Short Term Memory

Have you seen the movie “Memento”? Its protagonist struggles with short-term memory loss, and plain recurrent neural networks (RNNs) suffer from a similar problem: they struggle to retain context over long sequences. LSTM is a special version of the RNN that solves this problem through the concept of gates.

Sequence prediction problems have long been considered among the most challenging problems in machine learning. They cover a wide range of tasks, from predicting the next word of a sentence to finding patterns in stock market data, and from understanding movie plots to recognizing your way of speaking. As already mentioned, LSTM works on the concept of gates. There are 3 main gates in LSTM: the Forget gate, the Input gate, and the Output gate.

In the previous article, we discussed how Long Short Term Memory solves the limitations of RNNs. In this article, we will understand all the gates in detail and look at a simple implementation of this algorithm.

Basic Idea Behind LSTM

Long Short Term Memory networks, or LSTMs, are capable of learning long-term dependencies. The algorithm was first introduced by Hochreiter and Schmidhuber in 1997. LSTMs work well on a wide variety of sequential problems and are now widely used in industry. Remembering the context of a sentence for a long period of time is their default behavior, not something they struggle to learn.

Recurrent neural networks have a very simple repeating structure, such as a single tanh layer:

On the other hand, Long Short Term Memory networks also have a chain-like structure, but their repeating module differs from that of a plain RNN: they work on the concept of gates.

Don’t go haywire with this architecture; we will break it down into simpler steps, which will make it a piece of cake to grasp. LSTM has mainly 3 gates:

Forget Gate

Input Gate

Output Gate

Let’s look at all these gates closely and try to understand the nitty-gritty details behind all these gates.

Forget Gate

Let’s take an example of a text prediction problem.

As soon as ‘while’ is encountered, the forget gate realizes that there may be a change in the subject of the sentence or the context of the sentence. Therefore, the subject of the sentence is forgotten and the spot for the subject is vacated. When we start speaking about “Chirag” this position of the subject is allocated to “Chirag”.

All the irrelevant information is removed by multiplying the cell state with a filter. This is necessary for optimizing the performance of the LSTM network.

The previous hidden state Ht-1 and the current input vector Xt are multiplied with their respective weight matrices, a bias is added, and a sigmoid activation is applied, which generates probability-like scores. Since a sigmoid activation is used, the output always lies between 0 and 1. If the output is close to 0 for a particular value in the cell state, the forget gate wants to forget that information; if it is close to 1, it wants to remember it. This vector output from the sigmoid function is then multiplied element-wise with the cell state.
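Using the notation defined below, the forget gate at timestamp t takes its standard form:

$$f_t = \sigma\left(X_t \cdot U_f + H_{t-1} \cdot W_f + b_f\right)$$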

Here:

Xt is the current input vector

Uf is the weight associated with the input

Ht-1 is the previous hidden state

Wf is the weight associated with the hidden state

bf is the bias added to it

If we collect the weights into a single matrix, this formula can be written as:
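Concatenating the hidden state and the input, with a combined weight matrix Wf, this is commonly written as:

$$f_t = \sigma\left(W_f \cdot [H_{t-1}, X_t] + b_f\right)$$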

Now let’s look at the equation of memory cell state which is:
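In its standard form, the cell state update is (the input-gate term, it and the candidate value, is covered in the next sections):

$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$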

As you can see, the forget gate is multiplied with the cell state of the previous timestamp, and this is an element-wise multiplication. Wherever the forget gate matrix has a 1, the corresponding value is retained; wherever it has a 0, that information is eliminated. Let’s take an example:

Whenever our model sees Mr. Watson, it retains that information, or that context, in the cell state. Now, C is a matrix, so some of its values will retain the information that we have Mr. Watson here, and the forget gate will not allow that information to change; that is why the information is retained throughout. In an RNN, this information would have changed quickly as the model saw the next words. When we encounter some other subject, say Ms. Mary, the forget gate will forget the information about Mr. Watson, and the information about Ms. Mary will be added. This adding of new information is done by another gate known as the Input gate, which we will study next.

Input Gate

To get the intuition behind this let’s take another example:

Here the important information is that “Prince” was feeling nauseous and that’s why he couldn’t enjoy the party. The fact that he told all this over the phone is not important and can be ignored. This adding of new knowledge/information is done via the input gate, which is used to quantify the importance of the new information carried by the input.

The equation of the Input gate is:
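Using the notation defined below, the input gate takes its standard form:

$$i_t = \sigma\left(X_t \cdot U_i + H_{t-1} \cdot W_i + b_i\right)$$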

Here:

Xt is the current input vector at timestamp t

Ui is the weight matrix associated with the input

Ht-1 is the previous hidden state

Wi is the weight associated with the hidden state

bi is the bias added to it

If we collect the weights into a single matrix, this formula can be written as:
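With the combined weight matrix Wi, this is commonly written as:

$$i_t = \sigma\left(W_i \cdot [H_{t-1}, X_t] + b_i\right)$$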

Now we need to understand the candidate value. Let’s look at the memory cell state equation again:
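As introduced above, the cell state update is:

$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$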

where C̃t is the candidate value, which is represented by:
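In its standard form, the candidate value uses a tanh activation with its own weights and bias (denoted here as Uc, Wc, and bc, illustrative names not used elsewhere in this article):

$$\tilde{C}_t = \tanh\left(X_t \cdot U_c + H_{t-1} \cdot W_c + b_c\right)$$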

The candidate value equation is very similar to that of a simple recurrent neural network. As its name suggests, the candidate value represents potential new information that can be added to the cell state, and that potential new information is filtered by the input gate.

Since we are using the sigmoid activation function in the input gate, its output will be in the range 0 to 1, and thus it filters how much new information to add. The candidate value uses the tanh activation function, so its value ranges between -1 and 1. If the value is negative, the information is subtracted from the cell state; if it is positive, the information is added to the cell state at the current timestamp. Let’s take an example.

Now, clearly, instead of ‘her’ it should be ‘his’, and in place of ‘was’ it should be ‘will’, because looking at the context, the writer is talking about the future. So whenever our model reaches the word “tomorrow”, it needs to add that information along with whatever information it had before. This is an example where we do not need to forget anything; we are just adding new information while retaining the useful information we already had. Thus, this model has the capacity to retain old information for a long time while also adding new information.

Output Gate

Not all the information flowing through the memory cell state is fit for being output. Let’s visualize this with an example:

In this sentence, there could be a number of outputs for the blank space. A human being reading this phrase would recognize that the last word ‘brave’ is an adjective used to describe a noun, and would therefore guess that the answer must be a noun. Thus, the appropriate output for this blank would be ‘Bob’.

The job of selecting useful information from the memory cell state is done via the output gate. Let’s have a look at the equation of the output gate:
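In its standard form, with the notation used above:

$$o_t = \sigma\left(X_t \cdot U_o + H_{t-1} \cdot W_o + b_o\right)$$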

Here, Uo and Wo are the weights assigned to the input vector and the hidden state, respectively. The value of the output gate is also between 0 and 1, since we are using the sigmoid activation function. To calculate the hidden state, we use the output gate and the memory cell state:
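In the standard LSTM formulation, the hidden state is the output gate applied element-wise to the tanh-squashed cell state:

$$H_t = o_t \odot \tanh(C_t)$$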

where Ot is the output gate matrix and Ct is the cell state matrix.

The overall architecture of LSTM looks like this:

You must be wondering how these gates know which information to forget and which to add; the answer lies in the weight matrices. Remember that we train this model, and during training the values of these weights are updated in such a way that the network builds a certain level of understanding. After training, the model develops a sense of which information is useful and which is irrelevant, and this understanding is built by looking at thousands and thousands of examples.

An Implementation is Necessary

Let’s build a model that can predict the next n characters after a seed taken from the original text of Macbeth. The original text can be found here. A revised edition of the .txt file can be found here.

We will use NumPy and Keras for this implementation.

Importing Libraries

# Importing dependencies numpy and keras
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import np_utils

Loading Text File

# load text
filename = "macbeth.txt"
text = (open(filename).read()).lower()

# mapping characters with integers
unique_chars = sorted(list(set(text)))
char_to_int = {}
int_to_char = {}
for i, c in enumerate(unique_chars):
    char_to_int.update({c: i})
    int_to_char.update({i: c})

In the above code, we read the ‘.txt’ file from our system. In unique_chars we have all the unique characters in the text. In char_to_int, each unique character is assigned a number, since the computer cannot work with raw characters and we need to convert them into a numerical representation. This makes the computation part easier.

Preparing the Dataset

# preparing input and output dataset
n = len(text)
X = []
Y = []
for i in range(0, n - 50, 1):
    sequence = text[i:i + 50]
    label = text[i + 50]
    X.append([char_to_int[char] for char in sequence])
    Y.append(char_to_int[label])

We will prepare our model in such a way that if we want to predict ‘O’ in ‘HELLO’, then we would feed [‘H’, ‘E’, ‘L’, ‘L’] as input and get [‘O’] as output. Similarly, here we fix the length of the window we want (set to 50 in the example) and then save the encodings of the 50 characters in the window in X and the expected output, i.e. the 51st character, in Y.
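As a toy illustration (a hypothetical 4-character window on the string “hello”, not part of the Macbeth pipeline), the same idea looks like this:

# toy illustration: a 4-character window predicts the 5th character
toy_text = "hello"
toy_chars = sorted(set(toy_text))            # ['e', 'h', 'l', 'o']
toy_map = {c: i for i, c in enumerate(toy_chars)}
window = [toy_map[c] for c in toy_text[:4]]  # encodings of 'h', 'e', 'l', 'l'
target = toy_map[toy_text[4]]                # encoding of 'o'
print(window, target)                        # [1, 0, 2, 2] 3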

Reshaping of X

# reshaping, normalizing and one hot encoding
X_modified = numpy.reshape(X, (len(X), 50, 1))
X_modified = X_modified / float(len(unique_chars))
Y_modified = np_utils.to_categorical(Y)

Getting the input into the correct shape is the most crucial part of this implementation. We need to understand how LSTM accepts input and what shape it needs the input to be in. If you check the documentation, you will find that it takes input as [samples, time-steps, features], where samples is the number of data points we have, time-steps is the window size (how many characters we look back in order to predict the next one), and features is the number of variables per time step for the corresponding value in Y (1 here, since each time step is a single encoded character).
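A quick sanity check (the exact first dimension depends on the length of the text) is:

print(X_modified.shape)  # (number of 50-character windows, 50, 1)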

Scaling the values is a crucial part of building the model, so we scale the values in X_modified to lie between 0 and 1 and one-hot encode our true values in Y_modified.

Defining the LSTM Model

# defining the LSTM model
model = Sequential()
model.add(LSTM(300, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(300))
model.add(Dropout(0.2))
model.add(Dense(Y_modified.shape[1], activation='softmax'))
model.summary()
# compiling the model so it can be trained (loss/optimizer here are common defaults, not specified in the original)
model.compile(loss='categorical_crossentropy', optimizer='adam')

In the first layer, we use 300 memory units, and we also set return_sequences=True, which ensures that the next LSTM layer receives the full sequence of outputs rather than only the last one. For the input_shape argument, I passed the number of time steps and the number of features; the number of samples is picked up automatically from the data. If you are still confused, print the shape of X_modified and you will see what I mean. I am using a Dropout layer to avoid overfitting. Lastly, we have a fully connected layer with a softmax activation function and a number of neurons equal to the number of unique characters.

Fitting the Model and Making Predictions

# fitting the model
model.fit(X_modified, Y_modified, epochs=200, batch_size=40)

# picking a random seed
start_index = numpy.random.randint(0, len(X) - 1)
predict_string = X[start_index]

# generating characters
for i in range(50):
    x = numpy.reshape(predict_string, (1, len(predict_string), 1))
    x = x / float(len(unique_chars))
    # predicting
    pred_index = numpy.argmax(model.predict(x, verbose=0))
    char_out = int_to_char[pred_index]
    seq_in = [int_to_char[value] for value in predict_string]
    print(char_out)
    predict_string.append(pred_index)
    predict_string = predict_string[1:len(predict_string)]

The model is fit over 200 epochs with a batch size of 40. For prediction, we first pick a random start index and store that random sequence in a variable named ‘predict_string’. We then reshape and normalize it before feeding it to the model. The model gives out the encoding of the predicted character, which means the output is numerical, so we decode it back to the character value and append its encoding to the pattern, dropping the first element so the window length stays fixed. Eventually, after enough training epochs, it gives better and better results. This is how you would use LSTM to solve a sequence prediction task.

Conclusion on Long Short Term Memory

LSTMs, or Long Short Term Memory networks, are a special case of RNNs that try to solve the problems faced by RNNs. Problems like long-term dependencies and the vanishing and exploding gradient problems are handled by Long Short Term Memory with the help of gates. It has mainly 3 gates, called the Forget gate, Input gate, and Output gate. The forget gate is used to forget unnecessary information, the input gate is used to add new, important information to the model, and the output gate selects the useful information to pass on as the hidden state. Thanks to this, the information isn’t overwritten as quickly as in RNNs.

Below are some key takeaways from the article:

Long Short Term Memory or LSTM is used for sequential data like time series data, audio data, etc.

Long Short Term Memory or LSTM outperforms the other models when we want our model to learn from long-term dependencies.

It solves the problems faced by RNN (Vanishing and exploding gradient problems).

It works on the concept of gates (Forget gate, Input gate, and Output gate).

Two states are passed on to the next timestamp: the hidden state and the cell state.

Long Short Term Memory’s ability to forget, remember and update the information pushes it one step ahead of RNNs.

Connect with me on LinkedIn and Twitter

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


