1. Visiting EDU.LEARN for the first time

    You're visiting EDU.LEARN

    The best way to learn a language is to use it. On EDU.LEARN you'll find interesting texts and videos that give you exactly that opportunity. Don't worry - our videos have subtitles to help you understand them better. On top of that, clicking on any word gives you its translation and correct pronunciation.

    No, thanks
  2. Mini lessons

    Context is very important when learning a language. Pictures, usage examples, dialogues, audio recordings - all of these help you understand and remember new words and expressions. That's why we created Mini lessons: short lessons built from contextual slides that make your learning more effective. There are four types of Mini lessons - Grammar, Dialogues, Vocabulary, and Pictures.

    Next
  3. Videos

    Practice a foreign language by watching interesting videos. Choose a topic that interests you and a difficulty level, then click on a video. Don't worry, each one comes with subtitles. Or maybe you won't need them at all? Give it a try!

    Next
  4. Texts

    Read interesting articles that teach you new vocabulary and tell you more about the things you care about. Just like with videos, you can choose a topic and a difficulty level, then click on the article you want. Our interactive dictionary will help you understand even difficult texts, and the context will make the vocabulary easier to remember. In addition, every article can be read aloud by a virtual narrator, so you can practice listening and pronunciation!

    Next
  5. Words

    Here you'll find your "My words" list and a word search feature - and soon a thematic dictionary as well. You can add words to the "My words" list from the Videos and Texts sections. Every word added to the list can be reviewed later in one of our exercises. You can also go to your list at any time to check a word's meaning, pronunciation, and usage in a sentence. Use our word search in the "Vocabulary" section to find words in our database.

    Next
  6. Text list

    This list of texts appears when you click on "Texts". Choose a difficulty level and a topic, then the article that interests you. Once you've been taken to it, click "Play" if you want it read aloud by the virtual narrator - a good way to practice your listening skills. Some texts are especially interesting - they carry a badge in the top right corner. Be sure to read them!

    Next
  7. Video list

    This list of videos appears when you click on "Videos". Just as with Texts, first choose a topic that interests you and a difficulty level, then click on the video you want. The ones with a badge in the top right corner are especially interesting - don't miss them!

    Next
  8. Thank you for taking the tour!

    Now you know all of EDU.LEARN's features! We've prepared plenty of articles, videos, and mini lessons for you - you're sure to find something that interests you!

    Now we invite you to register and discover everything the portal has to offer.

    Thanks, I'll come back later
  9. Help list

    Need help with something? Check the list below:
    No, thanks


Blaise Aguera y Arcas demos Photosynth


Topic: Media

What I'm going to show you first, as quickly as I can,
is some foundational work, some new technology
that we brought to Microsoft as part of an acquisition
almost exactly a year ago. This is Seadragon.
And it's an environment in which you can either locally or remotely
interact with vast amounts of visual data.
We're looking at many, many gigabytes of digital photos here
and kind of seamlessly and continuously zooming in,
panning through the thing, rearranging it in any way we want.
And it doesn't matter how much information we're looking at,
how big these collections are or how big the images are.
Most of them are ordinary digital camera photos,
but this one, for example, is a scan from the Library of Congress,
and it's in the 300 megapixel range.
It doesn't make any difference
because the only thing that ought to limit the performance
of a system like this one is the number of pixels on your screen
at any given moment. It's also a very flexible architecture.
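To make that pixel-bound claim concrete: viewers in this family typically store each image as a tiled multi-resolution pyramid and fetch only the tiles that intersect the viewport at the current zoom. Below is a minimal Python sketch of that idea, assuming a 256-pixel tile size and a numbering where level 0 is the full image; it is illustrative only, not Seadragon's actual code.

```python
import math

TILE_SIZE = 256  # assumed tile edge length, in pixels

def pyramid_level(image_width, screen_width, zoom):
    """Pick the coarsest pyramid level that still fills the screen.

    Level 0 is the full-resolution image; each higher level halves
    both dimensions (an assumed numbering, for illustration).
    """
    # Screen pixels per image pixel at the current zoom.
    scale = screen_width * zoom / image_width
    return max(0, math.floor(-math.log2(max(scale, 1e-9))))

def visible_tiles(image_width, image_height, level, vx, vy, vw, vh):
    """Yield (col, row) for the tiles intersecting the viewport.

    The viewport rectangle (vx, vy, vw, vh) is in full-image coordinates.
    """
    factor = 2 ** level  # full-image pixels per pixel at this level
    level_w = math.ceil(image_width / factor)
    level_h = math.ceil(image_height / factor)
    col0 = max(0, int(vx / factor) // TILE_SIZE)
    row0 = max(0, int(vy / factor) // TILE_SIZE)
    col1 = min((level_w - 1) // TILE_SIZE, int((vx + vw) / factor) // TILE_SIZE)
    row1 = min((level_h - 1) // TILE_SIZE, int((vy + vh) / factor) // TILE_SIZE)
    for row in range(row0, row1 + 1):
        for col in range(col0, col1 + 1):
            yield (col, row)

# Even for a 300-megapixel scan, the work per frame is bounded by the
# handful of tiles covering the screen, never by the whole image.
```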
This is an entire book, an example of non-image data.
This is Bleak House by Dickens. Every column is a chapter.
To prove to you that it's really text, and not an image,
we can do something like so, to really show
that this is a real representation of the text; it's not a picture.
Maybe this is a kind of an artificial way to read an e-book.
I wouldn't recommend it.
This is a more realistic case. This is an issue of The Guardian.
Every large image is the beginning of a section.
And this really gives you the joy and the good experience
of reading the real paper version of a magazine or a newspaper,
which is an inherently multi-scale kind of medium.
We've also done a little something
with the corner of this particular issue of The Guardian.
We've made up a fake ad that's very high resolution --
much higher than you'd be able to get in an ordinary ad --
and we've embedded extra content.
If you want to see the features of this car, you can see it here.
Or other models, or even technical specifications.
And this really gets at some of these ideas
about really doing away with those limits on screen real estate.
We hope that this means no more pop-ups
and other kind of rubbish like that -- shouldn't be necessary.
Of course, mapping is one of those really obvious applications
for a technology like this.
And this one I really won't spend any time on,
except to say that we have things to contribute to this field as well.
But those are all the roads in the U.S.
superimposed on top of a NASA geospatial image.
So let's pull up, now, something else.
This is actually live on the Web now; you can go check it out.
This is a project called Photosynth,
which really marries two different technologies.
One of them is Seadragon
and the other is some very beautiful computer vision research
done by Noah Snavely, a graduate student at the University of Washington,
co-advised by Steve Seitz at U.W.
and Rick Szeliski at Microsoft Research. A very nice collaboration.
And so this is live on the Web. It's powered by Seadragon.
You can see that when we do these sorts of views,
we can dive through images
and have this kind of multi-resolution experience.
But the spatial arrangement of the images here is actually meaningful.
The computer vision algorithms have registered these images together,
so that they correspond to the real space in which these shots --
all taken near Grassi Lakes in the Canadian Rockies --
were taken. So you see elements here
of stabilized slide-show or panoramic imaging,
and these things have all been related spatially.
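The spatial registration described here comes from matching local features across photos and recovering camera poses with structure from motion (the Photo Tourism line of work). As a rough flavor of the first step, here is a two-image sketch using OpenCV; the photo paths and the intrinsic matrix K are placeholder assumptions, and a real pipeline matches thousands of photos and refines all poses with bundle adjustment.

```python
import cv2
import numpy as np

# Placeholder paths: any two overlapping photos of the same scene.
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect local features in each photo.
orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match descriptors between the two photos.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the relative camera geometry from the matches.
# K is an assumed pinhole intrinsic matrix; real systems calibrate it
# or read the focal length from EXIF data.
f, cx, cy = 1200.0, img1.shape[1] / 2, img1.shape[0] / 2
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# R and t give the relative rotation and translation direction between
# the two cameras; chaining such estimates across many photos places
# every shot in a common 3D space, as in the demo.
print("relative rotation:\n", R)
print("relative translation direction:", t.ravel())
```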
I'm not sure if I have time to show you any other environments.
There are some that are much more spatial.
I would like to jump straight to one of Noah's original data-sets --
and this is from an early prototype of Photosynth
that we first got working in the summer --
to show you what I think
is really the punchline behind this technology,
the Photosynth technology. And it's not necessarily so apparent
from looking at the environments that we've put up on the website.
We had to worry about the lawyers and so on.
This is a reconstruction of Notre Dame Cathedral
that was done entirely computationally
from images scraped from Flickr. You just type Notre Dame into Flickr,
and you get some pictures of guys in t-shirts, and of the campus
and so on. And each of these orange cones represents an image
that was discovered to belong to this model.
And so these are all Flickr images,
and they've all been related spatially in this way.
And we can just navigate in this very simple way.
(Applause)
You know, I never thought that I'd end up working at Microsoft.
It's very gratifying to have this kind of reception here.
(Laughter)
I guess you can see
this is lots of different types of cameras:
it's everything from cell phone cameras to professional SLRs,
quite a large number of them, stitched
together in this environment.
And if I can, I'll find some of the sort of weird ones.
So many of them are occluded by faces, and so on.
Somewhere in here there are actually
a series of photographs -- here we go.
This is actually a poster of Notre Dame that registered correctly.
We can dive in from the poster
to a physical view of this environment.
The point here really is that we can do things
with the social environment. This is now taking data from everybody --
from the entire collective memory
of, visually, what the Earth looks like --
and linking all of that together.
All of those photos become linked together,
and they make something emergent
that's greater than the sum of the parts.
You have a model that emerges of the entire Earth.
Think of this as the long tail to Stephen Lawler's Virtual Earth work.
And this is something that grows in complexity
as people use it, and whose benefits become greater
to the users as they use it.
Their own photos are getting tagged with meta-data
that somebody else entered.
If somebody bothered to tag all of these saints
and say who they all are, then my photo of Notre Dame Cathedral
suddenly gets enriched with all of that data,
and I can use it as an entry point to dive into that space,
into that meta-verse, using everybody else's photos,
and do a kind of a cross-modal
and cross-user social experience that way.
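One way to picture that cross-user enrichment: treat the registration output as a graph whose edges connect photos matched to the same scene, and let tags flow along it. The sketch below uses invented filenames and tags and is not Photosynth's actual data model.

```python
from collections import defaultdict, deque

# Hypothetical output of the registration step: undirected edges between
# photos the vision pipeline matched into the same scene.
matches = [("alice_042.jpg", "bob_7.jpg"),
           ("bob_7.jpg", "carol_photo.jpg")]

# Tags entered by individual users on their own photos.
tags = {"alice_042.jpg": {"Notre Dame", "west facade"},
        "carol_photo.jpg": {"Gallery of Kings"}}

graph = defaultdict(set)
for a, b in matches:
    graph[a].add(b)
    graph[b].add(a)

def propagated_tags(photo):
    """Collect tags from every photo reachable through scene matches."""
    seen, queue, collected = {photo}, deque([photo]), set()
    while queue:
        current = queue.popleft()
        collected |= tags.get(current, set())
        for neighbor in graph[current] - seen:
            seen.add(neighbor)
            queue.append(neighbor)
    return collected

# bob_7.jpg carries no tags of its own, yet inherits both users' labels.
print(sorted(propagated_tags("bob_7.jpg")))
# ['Gallery of Kings', 'Notre Dame', 'west facade']
```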
And of course, a by-product of all of that
is immensely rich virtual models
of every interesting part of the Earth, collected
not just from overhead flights and from satellite images
and so on, but from the collective memory.
Thank you so much.
(Applause)
Chris Anderson: Do I understand this right? That what your software is going to allow
is that at some point, really within the next few years,
all the pictures that are shared by anyone across the world
are going to basically link together?
BAA: Yes. What this is really doing is discovering --
it's creating hyperlinks, if you will, between images.
And it's doing that
based on the content inside the images.
And that gets really exciting when you think about the richness
of the semantic information that a lot of those images have.
Like when you do a web search for images,
you type in phrases, and the text on the web page
is carrying a lot of information about what that picture is of.
Now, what if that picture links to all of your pictures?
Then the amount of semantic interconnection
and the amount of richness that comes out of that
is really huge. It's a classic network effect.
CA: Blaise, that is truly incredible. Congratulations.
BAA: Thanks so much.