1. Visiting EDU.LEARN for the first time

    You are visiting EDU.LEARN

    The best way to learn a language is to use it. In EDU.LEARN you will find interesting texts and videos that give you exactly that opportunity. Don't worry - our videos have subtitles to help you understand them better. In addition, you can click on any word to see its translation and hear its correct pronunciation.

    No, thank you
  2. Mini lessons

    Context is very important when you learn a language. Pictures, usage examples, dialogues, audio recordings - all of these help you understand and remember new words and expressions. That is why we created Mini lessons: short lessons made of contextual slides that make your learning more effective. There are four types of Mini lessons - Grammar, Dialogues, Vocabulary and Pictures.

    Next
  3. Video

    Practise a foreign language while watching interesting videos. Choose a topic that interests you and a difficulty level, then click on a video. Don't worry, every video comes with subtitles. Or maybe you won't need them at all? Give it a try!

    Next
  4. Texts

    Read interesting articles that teach you new vocabulary and tell you more about the things you care about. Just like with videos, you can choose a topic and a difficulty level, then click on the article you want. Our interactive dictionary will help you understand even difficult texts, and the context will make the vocabulary easier to remember. In addition, every article can be read aloud by a virtual narrator, so you also practise listening and pronunciation!

    Next
  5. Words

    Here you will find your "My words" list and the word search feature - and soon a thematic dictionary as well. You can add words to the "My words" list from the Videos and Texts sections. Every word added to the list can be reviewed later in one of our exercises. You can also go to your list at any time to check a word's meaning, pronunciation and usage in a sentence. Use the word search in the "Vocabulary" section to find words in our database.

    Next
  6. Text list

    This list of texts appears when you click "Texts". Choose a difficulty level and a topic, and then the article that interests you. Once you are taken to it, click "Play" if you want the virtual narrator to read it aloud - a good way to practise your listening skills. Some texts are especially interesting - they have a badge in the top right corner. Be sure to read them!

    Next
  7. Video list

    This list of videos appears when you click "Video". Just as with Texts, first choose a topic that interests you and a difficulty level, then click on the video you want. The ones with a badge in the top right corner are especially interesting - don't miss them!

    Next
  8. Thank you for using the guide!

    You now know all of EDU.LEARN's features! We have prepared plenty of articles, videos and mini lessons for you - you are sure to find something that interests you!

    Now we invite you to sign up and discover everything the portal has to offer.

    Thank you, I'll come back later
  9. Help list

    Need help with something? Check our list below:
    No, thank you

Tan Le: A headset that reads your brainwaves


Temat: Science and technology

Up until now, our communication with machines
has always been limited
to conscious and direct forms.
Whether it's something simple
like turning on the lights with a switch,
or even as complex as programming robotics,
we have always had to give a command to a machine,
or even a series of commands,
in order for it to do something for us.
Communication between people on the other hand,
is far more complex and a lot more interesting,
because we take into account
so much more than what is explicitly expressed.
We observe facial expressions, body language,
and we can intuit feelings and emotions
from our dialogue with one another.
This actually forms a large part
of our decision-making process.
Our vision is to introduce
this whole new realm of human interaction
into human-computer interaction,
so that computers can understand
not only what you direct it to do,
but it can also respond
to your facial expressions
and emotional experiences.
And what better way to do this
than by interpreting the signals
naturally produced by our brain,
our center for control and experience.
Well, it sounds like a pretty good idea,
but this task, as Bruno mentioned,
isn't an easy one for two main reasons:
First, the detection algorithms.
Our brain is made up of
billions of active neurons,
around 170,000 km
of combined axon length.
When these neurons interact,
the chemical reaction emits an electrical impulse
which can be measured.
The majority of our functional brain
is distributed over
the outer surface layer of the brain.
And to increase the area that's available for mental capacity,
the brain surface is highly folded.
Now this cortical folding
presents a significant challenge
for interpreting surface electrical impulses.
Each individual's cortex
is folded differently,
very much like a fingerprint.
So even though a signal
may come from the same functional part of the brain,
by the time the structure has been folded,
its physical location
is very different between individuals,
even identical twins.
There is no longer any consistency
in the surface signals.
Our breakthrough was to create an algorithm
that unfolds the cortex,
so that we can map the signals
closer to its source,
and therefore making it capable of working across a mass population.
The second challenge
is the actual device for observing brainwaves.
EEG measurements typically involve
a hairnet with an array of sensors,
like the one that you can see here in the photo.
A technician will put the electrodes
onto the scalp
using a conductive gel or paste
and usually after a procedure of preparing the scalp
by light abrasion.
Now this is quite time consuming
and isn't the most comfortable process.
And on top of that, these systems
actually cost in the tens of thousands of dollars.
So with that, I'd like to invite onstage
Evan Grant, who is one of last year's speakers,
who's kindly agreed
to help me to demonstrate
what we've been able to develop.
(Applause)
So the device that you see
is a 14-channel, high-fidelity
EEG acquisition system.
It doesn't require any scalp preparation,
no conductive gel or paste.
It only takes a few minutes to put on
and for the signals to settle.
It's also wireless,
so it gives you the freedom to move around.
And compared to the tens of thousands of dollars
for a traditional EEG system,
this headset only costs
a few hundred dollars.
Now on to the detection algorithms.
So facial expressions --
as I mentioned before in emotional experiences --
are actually designed to work out of the box
with some sensitivity adjustments
available for personalization.
But with the limited time we have available,
I'd like to show you the cognitive suite,
which is the ability for you
to basically move virtual objects with your mind.
Now, Evan is new to this system,
so what we have to do first
is create a new profile for him.
He's obviously not Joanne -- so we'll "add user."
Evan. Okay.
So the first thing we need to do with the cognitive suite
is to start with training
a neutral signal.
With neutral, there's nothing in particular
that Evan needs to do.
He just hangs out. He's relaxed.
And the idea is to establish a baseline
or normal state for his brain,
because every brain is different.
It takes eight seconds to do this.
And now that that's done,
we can choose a movement-based action.
So Evan, choose something
that you can visualize clearly in your mind.
Evan Grant: Let's do "pull."
Tan Le: Okay. So let's choose "pull."
So the idea here now
is that Evan needs to
imagine the object coming forward
into the screen.
And there's a progress bar that will scroll across the screen
while he's doing that.
The first time, nothing will happen,
because the system has no idea how he thinks about "pull."
But maintain that thought
for the entire duration of the eight seconds.
So: one, two, three, go.
Okay.
So once we accept this,
the cube is live.
So let's see if Evan
can actually try and imagine pulling.
Ah, good job!
(Applause)
That's pretty amazing.
(Applause)
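
To make the training flow just shown concrete, here is a minimal sketch in Python. It is not the actual headset software: the sampling rate, the feature extraction and every name in it (record_window, band_power_features, CognitiveProfile) are assumptions made for illustration. It only mirrors the sequence from the demo - record an eight-second neutral baseline, record eight seconds of the user holding one thought ("pull"), store the deviation from the baseline as that action's signature, and then compare live windows against the stored signatures.

```python
import numpy as np

SAMPLE_RATE_HZ = 128   # assumed sampling rate; not stated in the talk
N_CHANNELS = 14        # the headset shown is described as 14-channel
TRAIN_SECONDS = 8      # both "neutral" and an action are trained for eight seconds


def record_window(seconds):
    """Stand-in for reading from the headset: here we just synthesize random data."""
    return np.random.randn(N_CHANNELS, seconds * SAMPLE_RATE_HZ)


def band_power_features(window):
    """Reduce a (channels x samples) EEG window to one mean log-power value per channel."""
    power = np.mean(window.astype(float) ** 2, axis=1)
    return np.log(power + 1e-12)


class CognitiveProfile:
    """Per-user profile: a neutral baseline plus one signature per trained action."""

    def __init__(self, name):
        self.name = name
        self.neutral = None
        self.actions = {}

    def train_neutral(self):
        # "With neutral, there's nothing in particular that Evan needs to do."
        self.neutral = band_power_features(record_window(TRAIN_SECONDS))

    def train_action(self, label):
        # The user holds one thought (e.g. "pull") for the whole eight seconds.
        signature = band_power_features(record_window(TRAIN_SECONDS))
        self.actions[label] = signature - self.neutral  # store the deviation from baseline

    def detect(self, window, threshold=0.5):
        """Return the trained action whose signature best matches the current window, if any."""
        deviation = band_power_features(window) - self.neutral
        best, best_score = None, threshold
        for label, signature in self.actions.items():
            score = float(np.dot(deviation, signature) /
                          (np.linalg.norm(deviation) * np.linalg.norm(signature) + 1e-12))
            if score > best_score:
                best, best_score = label, score
        return best


profile = CognitiveProfile("Evan")       # "He's obviously not Joanne -- so we'll 'add user.'"
profile.train_neutral()                  # eight-second relaxed baseline
profile.train_action("pull")             # eight seconds of imagining the cube coming forward
print(profile.detect(record_window(1)))  # a live one-second window -> "pull" or None
```
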
So we have a little bit of time available,
so I'm going to ask Evan
to do a really difficult task.
And this one is difficult
because it's all about being able to visualize something
that doesn't exist in our physical world.
This is "disappear."
So what you want -- at least with movement-based actions,
we do that all the time, so you can visualize it.
But with "disappear," there's really no analogies.
So Evan, what you want to do here
is to imagine the cube slowly fading out, okay.
Same sort of drill. So: one, two, three, go.
Okay. Let's try that.
Oh, my goodness. He's just too good.
Let's try that again.
EG: Losing concentration.
(Laughter)
TL: But we can see that it actually works,
even though you can only hold it
for a little bit of time.
As I said, it's a very difficult process
to imagine this.
And the great thing about it is that
we've only given the software one instance
of how he thinks about "disappear."
As there is a machine learning algorithm in this --
(Applause)
Thank you.
Good job. Good job.
(Applause)
Thank you, Evan, you're a wonderful, wonderful
example of the technology.
So as you can see before,
there is a leveling system built into this software
so that as Evan, or any user,
becomes more familiar with the system,
they can continue to add more and more detections,
so that the system begins to differentiate
between different distinct thoughts.
And once you've trained up the detections,
these thoughts can be assigned or mapped
to any computing platform,
application or device.
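
The "assigned or mapped to any computing platform, application or device" step is essentially an event-routing layer on top of the trained detections. The sketch below is a hypothetical stand-in, not the vendor SDK: the DetectionRouter class and the handlers bound to it are invented for illustration, and the detection labels ("pull", "lift", "disappear") are simply the ones mentioned in the talk.

```python
from typing import Callable, Dict


class DetectionRouter:
    """Routes incoming detection events to whatever platform, app or device you bind them to."""

    def __init__(self):
        self._handlers: Dict[str, Callable[[float], None]] = {}

    def bind(self, detection: str, handler: Callable[[float], None]) -> None:
        self._handlers[detection] = handler

    def on_detection(self, detection: str, strength: float) -> None:
        handler = self._handlers.get(detection)
        if handler is not None:
            handler(strength)


router = DetectionRouter()
router.bind("pull", lambda s: print(f"game: move the virtual object toward the camera ({s:.2f})"))
router.bind("lift", lambda s: print(f"toy helicopter: increase throttle ({s:.2f})"))
router.bind("disappear", lambda s: print(f"scene: fade the cube out ({s:.2f})"))

# The detection engine would call this as thoughts are recognized:
router.on_detection("pull", 0.8)
```
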
So I'd like to show you a few examples,
because there are many possible applications
for this new interface.
In games and virtual worlds, for example,
your facial expressions
can naturally and intuitively be used
to control an avatar or virtual character.
Obviously, you can experience the fantasy of magic
and control the world with your mind.
And also, colors, lighting,
sound and effects,
can dynamically respond to your emotional state
to heighten the experience that you're having, in real time.
And moving on to some applications
developed by developers and researchers around the world,
with robots and simple machines, for example --
in this case, flying a toy helicopter
simply by thinking lift with your mind.
The technology can also be applied
to real world applications --
in this example, a smart home.
You know, from the user interface of the control system
to opening curtains
or closing curtains.
And of course also to the lighting --
turning them on
or off.
And finally,
to real life-changing applications
such as being able to control an electric wheelchair.
In this example,
facial expressions are mapped to the movement commands.
Man: Now blink right to go right.
Now blink left to turn back left.
Now smile to go straight.
TL: We really -- Thank you.
(Applause)
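
The wheelchair segment is a concrete case of that mapping, with facial-expression events standing in for movement commands. Here is a hedged sketch of how such a mapping might be wired up; the expression names and the WheelchairController methods are invented for illustration, since the talk does not describe the actual control API.

```python
class WheelchairController:
    """Hypothetical drive interface; each method would send a command to the chair."""

    def turn_right(self):
        print("wheelchair: turning right")

    def turn_left(self):
        print("wheelchair: turning left")

    def go_straight(self):
        print("wheelchair: going straight")

    def stop(self):
        print("wheelchair: stopping")


def handle_expression(event, chair):
    # "Now blink right to go right. Now blink left to turn back left. Now smile to go straight."
    mapping = {
        "blink_right": chair.turn_right,
        "blink_left": chair.turn_left,
        "smile": chair.go_straight,
    }
    mapping.get(event, chair.stop)()  # anything unrecognized stops the chair


chair = WheelchairController()
for event in ["blink_right", "smile", "blink_left", "neutral"]:
    handle_expression(event, chair)
```
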
We are really only scratching the surface of what is possible today.
And with the community's input,
and also with the involvement of developers
and researchers from around the world,
we hope you can help us to shape
where the technology goes from here. Thank you so much.